[GitHub] [hadoop] hadoop-yetus commented on pull request #2073: HADOOP-16998 WASB : NativeAzureFsOutputStream#close() throwing java.l…
hadoop-yetus commented on pull request #2073: URL: https://github.com/apache/hadoop/pull/2073#issuecomment-654641695

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 25m 21s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 23m 10s | trunk passed |
| +1 :green_heart: | compile | 0m 36s | trunk passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 |
| +1 :green_heart: | compile | 0m 34s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| +1 :green_heart: | checkstyle | 0m 22s | trunk passed |
| +1 :green_heart: | mvnsite | 0m 35s | trunk passed |
| +1 :green_heart: | shadedclient | 16m 50s | branch has no errors when building and testing our client artifacts. |
| -1 :x: | javadoc | 0m 27s | hadoop-azure in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. |
| +1 :green_heart: | javadoc | 0m 26s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| +0 :ok: | spotbugs | 1m 2s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 0m 59s | trunk passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 32s | the patch passed |
| +1 :green_heart: | compile | 0m 32s | the patch passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 |
| +1 :green_heart: | javac | 0m 32s | the patch passed |
| +1 :green_heart: | compile | 0m 26s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| +1 :green_heart: | javac | 0m 26s | the patch passed |
| +1 :green_heart: | checkstyle | 0m 17s | the patch passed |
| +1 :green_heart: | mvnsite | 0m 30s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | shadedclient | 15m 54s | patch has no errors when building and testing our client artifacts. |
| -1 :x: | javadoc | 0m 24s | hadoop-azure in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. |
| +1 :green_heart: | javadoc | 0m 24s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| +1 :green_heart: | findbugs | 1m 5s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 1m 31s | hadoop-azure in the patch passed. |
| +1 :green_heart: | asflicense | 0m 31s | The patch does not generate ASF License warnings. |
| | | | 93m 45s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-2073/5/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2073 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 9fb42898486a 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / f77bbc2123e |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-2073/5/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-2073/5/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-2073/5/testReport/ |
| Max. process+thread count | 414 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-2073/5/console |
| versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] fengnanli edited a comment on pull request #2110: HDFS-15447 RBF: Add top real owners metrics for delegation tokens
fengnanli edited a comment on pull request #2110: URL: https://github.com/apache/hadoop/pull/2110#issuecomment-653946194

Thanks very much @sunchao @goiri @Hexiaoqiao for the detailed review. I have addressed all the comments; please give it another look.
[GitHub] [hadoop] fengnanli commented on pull request #2110: HDFS-15447 RBF: Add top real owners metrics for delegation tokens
fengnanli commented on pull request #2110: URL: https://github.com/apache/hadoop/pull/2110#issuecomment-654625072

I didn't know I had to click `resolve conversation` to publish the reply. Just resolved all of the comments.
[GitHub] [hadoop] sunchao commented on pull request #2110: HDFS-15447 RBF: Add top real owners metrics for delegation tokens
sunchao commented on pull request #2110: URL: https://github.com/apache/hadoop/pull/2110#issuecomment-654618574

Hmm @fengnanli it seems not all comments were addressed in the [latest commit](https://github.com/apache/hadoop/pull/2110/commits/eee3bf835215916aaf1632920523d612bee5429f)
[GitHub] [hadoop] anoopsjohn commented on pull request #2073: HADOOP-16998 WASB : NativeAzureFsOutputStream#close() throwing java.l…
anoopsjohn commented on pull request #2073: URL: https://github.com/apache/hadoop/pull/2073#issuecomment-654607050

Sorry, I got confused by the import order that we follow in Hadoop. Do we have a code formatter for Eclipse which specifies the import order? Based on the line number Steve mentioned, I believe he meant this: "Or you want to move `import org.apache.hadoop.classification.InterfaceAudience;` (at L30) to L28, so that things within the org.apache block are imported in order, while still keeping org.apache and org.slf as two blocks." Let me change that in my PR anyway.
[GitHub] [hadoop] hadoop-yetus commented on pull request #2121: HDFS-15449. Optionally ignore port number in mount-table name when picking from initialized uri.
hadoop-yetus commented on pull request #2121: URL: https://github.com/apache/hadoop/pull/2121#issuecomment-654595472

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 25m 49s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. |
| +0 :ok: | markdownlint | 0m 0s | markdownlint was not available. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 4 new or modified test files. |
||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 0m 22s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 21m 40s | trunk passed |
| +1 :green_heart: | compile | 21m 53s | trunk passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 |
| +1 :green_heart: | compile | 17m 51s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| +1 :green_heart: | checkstyle | 3m 27s | trunk passed |
| +1 :green_heart: | mvnsite | 2m 47s | trunk passed |
| +1 :green_heart: | shadedclient | 22m 24s | branch has no errors when building and testing our client artifacts. |
| -1 :x: | javadoc | 0m 36s | hadoop-common in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. |
| -1 :x: | javadoc | 0m 41s | hadoop-hdfs in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. |
| +1 :green_heart: | javadoc | 1m 43s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| +0 :ok: | spotbugs | 3m 12s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 5m 18s | trunk passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 23s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 57s | the patch passed |
| +1 :green_heart: | compile | 19m 58s | the patch passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 |
| +1 :green_heart: | javac | 19m 58s | the patch passed |
| +1 :green_heart: | compile | 17m 22s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| +1 :green_heart: | javac | 17m 22s | the patch passed |
| +1 :green_heart: | checkstyle | 2m 51s | the patch passed |
| +1 :green_heart: | mvnsite | 2m 44s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | shadedclient | 15m 45s | patch has no errors when building and testing our client artifacts. |
| -1 :x: | javadoc | 0m 36s | hadoop-common in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. |
| -1 :x: | javadoc | 0m 41s | hadoop-hdfs in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. |
| +1 :green_heart: | javadoc | 1m 46s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| +1 :green_heart: | findbugs | 5m 33s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 9m 37s | hadoop-common in the patch passed. |
| -1 :x: | unit | 128m 15s | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 58s | The patch does not generate ASF License warnings. |
| | | | 331m 22s | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
| | hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
| | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA |
| | hadoop.hdfs.server.datanode.checker.TestThrottledAsyncCheckerTimeout |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-2121/3/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2121 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle markdownlint |
| uname | Linux 16498dff0eeb 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / e820baa6e6f |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-2121/3/artifact/out/branch-javadoc-hadoop-c
[GitHub] [hadoop] zhaoyim commented on pull request #2037: HDFS-14984. HDFS setQuota: Error message should be added for invalid …
zhaoyim commented on pull request #2037: URL: https://github.com/apache/hadoop/pull/2037#issuecomment-654568431

Fixed style.
[GitHub] [hadoop] zhaoyim commented on a change in pull request #2037: HDFS-14984. HDFS setQuota: Error message should be added for invalid …
zhaoyim commented on a change in pull request #2037: URL: https://github.com/apache/hadoop/pull/2037#discussion_r450583750

## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java
## @@ -201,8 +201,19 @@ public void run(Path path) throws IOException {
       super(conf);
       CommandFormat c = new CommandFormat(2, Integer.MAX_VALUE);
       List parameters = c.parse(args, pos);
-      this.quota =
-          StringUtils.TraditionalBinaryPrefix.string2long(parameters.remove(0));
+      String str = parameters.get(0).trim();
+      try {
+        this.quota = StringUtils.TraditionalBinaryPrefix
+            .string2long(parameters.remove(0));
+      } catch (NumberFormatException e) {
+        throw new IllegalArgumentException("\"" + str +
+            "\" is not a valid value for a quota.");
+      }
+      if (HdfsConstants.QUOTA_DONT_SET == this.quota) {
+        System.out.print("WARN: \"" + this.quota +

Review comment: Agree with you; changed it to throw an exception. Thanks for confirming!
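The behavior discussed in this hunk can be illustrated with a small self-contained sketch. `QuotaParseSketch.parseQuota` is a hypothetical helper, not the actual DFSAdmin code: plain `Long.parseLong` stands in for Hadoop's `StringUtils.TraditionalBinaryPrefix.string2long`, and only the wrap-the-`NumberFormatException` idea from the patch is mirrored here.

```java
// Hypothetical sketch of the quota-parsing change discussed above.
// Long.parseLong stands in for StringUtils.TraditionalBinaryPrefix.string2long.
public class QuotaParseSketch {
  static long parseQuota(String arg) {
    String str = arg.trim();
    try {
      return Long.parseLong(str);
    } catch (NumberFormatException e) {
      // Instead of letting the raw NumberFormatException escape to the user,
      // surface a message that names the offending value.
      throw new IllegalArgumentException(
          "\"" + str + "\" is not a valid value for a quota.");
    }
  }

  public static void main(String[] args) {
    System.out.println(parseQuota("1024"));
    try {
      parseQuota("10x");
    } catch (IllegalArgumentException e) {
      System.out.println(e.getMessage());
    }
  }
}
```

The point of the change is purely diagnostic: the same inputs fail either way, but the user sees which argument was rejected instead of a bare stack trace.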
[GitHub] [hadoop] zhaoyim commented on a change in pull request #2037: HDFS-14984. HDFS setQuota: Error message should be added for invalid …
zhaoyim commented on a change in pull request #2037: URL: https://github.com/apache/hadoop/pull/2037#discussion_r450571072

## File path: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
## @@ -2553,16 +2553,17 @@ void setQuota(String src, long namespaceQuota, long storagespaceQuota) throws IOException {
     checkOpen();
     // sanity check
-    if ((namespaceQuota <= 0 &&
-        namespaceQuota != HdfsConstants.QUOTA_DONT_SET &&
-        namespaceQuota != HdfsConstants.QUOTA_RESET) ||
-        (storagespaceQuota < 0 &&
+    if (namespaceQuota <= 0 &&
+        namespaceQuota != HdfsConstants.QUOTA_DONT_SET &&
+        namespaceQuota != HdfsConstants.QUOTA_RESET) {
+      throw new IllegalArgumentException("Invalid values for " +
+          "namespace quota : " + namespaceQuota);
+    }
+    if (storagespaceQuota < 0 &&

Review comment: @Hexiaoqiao In my understanding, the difference is that after a directory is created, its name quota is already 1 (the name quota counts the directory itself), but the space quota can be 0.
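The asymmetry described in this comment (a fresh directory's name quota is already 1, while its space quota may be 0) is what the split sanity check in the hunk enforces. Below is a minimal sketch of that check; the `QUOTA_DONT_SET`/`QUOTA_RESET` values are illustrative placeholders, not taken from `HdfsConstants`.

```java
// Sketch of the split quota sanity check from the diff above.
// The sentinel values are placeholders for this example only.
public class QuotaCheckSketch {
  static final long QUOTA_DONT_SET = Long.MAX_VALUE; // placeholder sentinel
  static final long QUOTA_RESET = -1L;               // placeholder sentinel

  static void checkQuotas(long namespaceQuota, long storagespaceQuota) {
    // A directory's name quota counts the directory itself, so it is at
    // least 1; zero is therefore invalid for the namespace quota.
    if (namespaceQuota <= 0
        && namespaceQuota != QUOTA_DONT_SET
        && namespaceQuota != QUOTA_RESET) {
      throw new IllegalArgumentException(
          "Invalid values for namespace quota : " + namespaceQuota);
    }
    // The space quota, by contrast, may legitimately be 0.
    if (storagespaceQuota < 0
        && storagespaceQuota != QUOTA_DONT_SET
        && storagespaceQuota != QUOTA_RESET) {
      throw new IllegalArgumentException(
          "Invalid values for storagespace quota : " + storagespaceQuota);
    }
  }

  public static void main(String[] args) {
    checkQuotas(1, 0); // valid: name quota 1, space quota 0
    System.out.println("quota check passed");
  }
}
```

Splitting the original combined condition into two checks also lets the error message say which of the two quotas was invalid.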
[GitHub] [hadoop] goiri merged pull request #2096: HDFS-15312. Apply umask when creating directory by WebHDFS
goiri merged pull request #2096: URL: https://github.com/apache/hadoop/pull/2096
[GitHub] [hadoop] umamaheswararao merged pull request #2121: HDFS-15449. Optionally ignore port number in mount-table name when picking from initialized uri.
umamaheswararao merged pull request #2121: URL: https://github.com/apache/hadoop/pull/2121
[GitHub] [hadoop] jimmy-zuber-amzn commented on pull request #2113: HADOOP-17105: S3AFS - Do not attempt to resolve symlinks in globStatus
jimmy-zuber-amzn commented on pull request #2113: URL: https://github.com/apache/hadoop/pull/2113#issuecomment-654534599

> patch LGTM. Which endpoint (e.g us-west-2) and what build CLI options did you use?
>
> we don't need that much detail, though if tests are failing that's good to call out so you can get some assistance debugging. e.g
>
> [#2076 (comment)](https://github.com/apache/hadoop/pull/2076#issuecomment-649564035)

I ran with the following test settings; no tests failed, by the way:

* Region: us-west-2
* Commands
  * w/ S3Guard: `mvn -T 1C verify -Dparallel-tests -DtestsThreadCount=6 -Ds3guard -Ddynamo -Dauth`
  * without: `mvn clean verify -Dparallel-tests`
[GitHub] [hadoop] umamaheswararao commented on pull request #2121: HDFS-15449. Optionally ignore port number in mount-table name when picking from initialized uri.
umamaheswararao commented on pull request #2121: URL: https://github.com/apache/hadoop/pull/2121#issuecomment-654533419

Thank you @virajith for the review! I have addressed your comments, please take a look at it. Thanks
[GitHub] [hadoop] liuml07 commented on pull request #2073: HADOOP-16998 WASB : NativeAzureFsOutputStream#close() throwing java.l…
liuml07 commented on pull request #2073: URL: https://github.com/apache/hadoop/pull/2073#issuecomment-654511754

This is still open. @steveloughran, do you have a chance to look at the latest code and also the question @anoopsjohn has? Thanks!
[GitHub] [hadoop] sunchao merged pull request #2080: HDFS-15417. RBF: Get the datanode report from cache for federation WebHDFS operations
sunchao merged pull request #2080: URL: https://github.com/apache/hadoop/pull/2080
[GitHub] [hadoop] sunchao commented on pull request #2080: HDFS-15417. RBF: Get the datanode report from cache for federation WebHDFS operations
sunchao commented on pull request #2080: URL: https://github.com/apache/hadoop/pull/2080#issuecomment-654509964

Merged. Thanks @NickyYe for the contribution and @Hexiaoqiao for helping review!
[GitHub] [hadoop] umamaheswararao commented on a change in pull request #2121: HDFS-15449. Optionally ignore port number in mount-table name when picking from initialized uri.
umamaheswararao commented on a change in pull request #2121: URL: https://github.com/apache/hadoop/pull/2121#discussion_r450524706

## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemOverloadScheme.java
## @@ -85,9 +87,14 @@
 * Op3: Create file s3a://bucketA/salesDB/dbfile will go to
 * s3a://bucketA/salesDB/dbfile
 *
- * Note: In ViewFileSystemOverloadScheme, by default the mount links will be
+ * Note:
+ * (1) In ViewFileSystemOverloadScheme, by default the mount links will be
 * represented as non-symlinks. If you want to change this behavior, please see
 * {@link ViewFileSystem#listStatus(Path)}
+ * (2) In ViewFileSystemOverloadScheme, the initialized uri's hostname only will

Review comment: Yes, I reread that. Fixed. Thank you!

## File path: hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFsOverloadScheme.md
## @@ -28,7 +28,11 @@ View File System Overload Scheme
### Details
-The View File System Overload Scheme is an extension to the View File System. This will allow users to continue to use their existing fs.defaultFS configured scheme or any new scheme name instead of using scheme `viewfs`. Mount link configurations key, value formats are same as in [ViewFS Guide](./ViewFs.html). If a user wants to continue use the same fs.defaultFS and wants to have more mount points, then mount link configurations should have the current fs.defaultFS authority name as mount table name. Example if fs.defaultFS is `hdfs://mycluster`, then the mount link configuration key name should be like in the following format `fs.viewfs.mounttable.*mycluster*.link.`. We will discuss more example configurations in following sections.
+The View File System Overload Scheme is an extension to the View File System. This will allow users to continue to use their existing fs.defaultFS configured scheme or any new scheme name instead of using scheme `viewfs`.
+Mount link configurations key, value formats are same as in [ViewFS Guide](./ViewFs.html).
+If a user wants to continue use the same fs.defaultFS and wants to have more mount points, then mount link configurations should have the ViewFileSystemOverloadScheme initialized uri's hostname as the mount table name.
+Example if fs.defaultFS is `hdfs://mycluster`, then the mount link configuration key name should be like in the following format `fs.viewfs.mounttable.*mycluster*.link.`.
+Even if the initialized fs uri has hostname:port, it will simply ignore the port number and considers only hostname as mount table name. We will discuss more example configurations in following sections.

Review comment: done.

## File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestViewFileSystemOverloadSchemeWithDFSAdmin.java
## @@ -215,7 +214,7 @@ public void testSafeModeWithWrongFS() throws Exception {
 */
 @Test
 public void testSafeModeShouldFailOnLocalTargetFS() throws Exception {
-    addMountLinks(defaultFSURI.getAuthority(), new String[] {LOCAL_FOLDER },
+    addMountLinks(defaultFSURI.getHost(), new String[] {LOCAL_FOLDER },

Review comment: Done
[GitHub] [hadoop] virajith commented on a change in pull request #2121: HDFS-15449. Optionally ignore port number in mount-table name when picking from initialized uri.
virajith commented on a change in pull request #2121: URL: https://github.com/apache/hadoop/pull/2121#discussion_r450505511

## File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestViewFileSystemOverloadSchemeWithDFSAdmin.java
## @@ -215,7 +214,7 @@ public void testSafeModeWithWrongFS() throws Exception {
 */
 @Test
 public void testSafeModeShouldFailOnLocalTargetFS() throws Exception {
-    addMountLinks(defaultFSURI.getAuthority(), new String[] {LOCAL_FOLDER },
+    addMountLinks(defaultFSURI.getHost(), new String[] {LOCAL_FOLDER },

Review comment: nit: remove space after `LOCAL_FOLDER`

## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystemOverloadScheme.java
## @@ -85,9 +87,14 @@
 * Op3: Create file s3a://bucketA/salesDB/dbfile will go to
 * s3a://bucketA/salesDB/dbfile
 *
- * Note: In ViewFileSystemOverloadScheme, by default the mount links will be
+ * Note:
+ * (1) In ViewFileSystemOverloadScheme, by default the mount links will be
 * represented as non-symlinks. If you want to change this behavior, please see
 * {@link ViewFileSystem#listStatus(Path)}
+ * (2) In ViewFileSystemOverloadScheme, the initialized uri's hostname only will

Review comment: nit: "the initialized uri's hostname only.." -> "only the initialized uri's hostname.."

## File path: hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFsOverloadScheme.md
## @@ -28,7 +28,11 @@ View File System Overload Scheme
### Details
-The View File System Overload Scheme is an extension to the View File System. This will allow users to continue to use their existing fs.defaultFS configured scheme or any new scheme name instead of using scheme `viewfs`. Mount link configurations key, value formats are same as in [ViewFS Guide](./ViewFs.html). If a user wants to continue use the same fs.defaultFS and wants to have more mount points, then mount link configurations should have the current fs.defaultFS authority name as mount table name. Example if fs.defaultFS is `hdfs://mycluster`, then the mount link configuration key name should be like in the following format `fs.viewfs.mounttable.*mycluster*.link.`. We will discuss more example configurations in following sections.
+The View File System Overload Scheme is an extension to the View File System. This will allow users to continue to use their existing fs.defaultFS configured scheme or any new scheme name instead of using scheme `viewfs`.
+Mount link configurations key, value formats are same as in [ViewFS Guide](./ViewFs.html).
+If a user wants to continue use the same fs.defaultFS and wants to have more mount points, then mount link configurations should have the ViewFileSystemOverloadScheme initialized uri's hostname as the mount table name.
+Example if fs.defaultFS is `hdfs://mycluster`, then the mount link configuration key name should be like in the following format `fs.viewfs.mounttable.*mycluster*.link.`.
+Even if the initialized fs uri has hostname:port, it will simply ignore the port number and considers only hostname as mount table name. We will discuss more example configurations in following sections.

Review comment: nit: "considers only hostname.." -> "only consider the hostname.."
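The hostname-vs-authority distinction these review comments revolve around can be sketched in a few lines. `buildMountTableKey` is an illustrative helper, not the actual `ViewFileSystemOverloadScheme` code; it uses `java.net.URI#getHost` (rather than `getAuthority`, which would keep the port) so that `hdfs://mycluster` and `hdfs://mycluster:8020` map to the same mount-table key.

```java
import java.net.URI;

// Sketch of the HDFS-15449 idea: build the mount-table config key from
// only the hostname of the initialized fs uri, dropping any port.
public class MountTableKeySketch {
  static String buildMountTableKey(String fsUri, String linkName) {
    // getHost() returns just "mycluster"; getAuthority() would return
    // "mycluster:8020" when a port is present.
    String host = URI.create(fsUri).getHost();
    return "fs.viewfs.mounttable." + host + ".link." + linkName;
  }

  public static void main(String[] args) {
    // Both uris resolve to the same mount-table name.
    System.out.println(buildMountTableKey("hdfs://mycluster:8020", "data"));
    System.out.println(buildMountTableKey("hdfs://mycluster", "data"));
  }
}
```

This mirrors the documentation change quoted above: even when fs.defaultFS carries a hostname:port authority, only the hostname is used as the mount table name.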
[GitHub] [hadoop] hadoop-yetus commented on pull request #2121: HDFS-15449. Optionally ignore port number in mount-table name when picking from initialized uri.
hadoop-yetus commented on pull request #2121: URL: https://github.com/apache/hadoop/pull/2121#issuecomment-654464646

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 1m 9s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +0 :ok: | markdownlint | 0m 1s | markdownlint was not available. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 4 new or modified test files. |
||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 1m 8s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 22m 5s | trunk passed |
| +1 :green_heart: | compile | 20m 43s | trunk passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 |
| +1 :green_heart: | compile | 17m 11s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| +1 :green_heart: | checkstyle | 2m 48s | trunk passed |
| +1 :green_heart: | mvnsite | 2m 48s | trunk passed |
| +1 :green_heart: | shadedclient | 22m 1s | branch has no errors when building and testing our client artifacts. |
| -1 :x: | javadoc | 0m 39s | hadoop-common in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. |
| -1 :x: | javadoc | 0m 43s | hadoop-hdfs in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. |
| +1 :green_heart: | javadoc | 1m 53s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| +0 :ok: | spotbugs | 3m 30s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 5m 54s | trunk passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 24s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 2m 3s | the patch passed |
| +1 :green_heart: | compile | 21m 2s | the patch passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 |
| +1 :green_heart: | javac | 21m 2s | the patch passed |
| +1 :green_heart: | compile | 18m 29s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| +1 :green_heart: | javac | 18m 29s | the patch passed |
| +1 :green_heart: | checkstyle | 2m 50s | the patch passed |
| +1 :green_heart: | mvnsite | 2m 45s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | shadedclient | 15m 35s | patch has no errors when building and testing our client artifacts. |
| -1 :x: | javadoc | 0m 36s | hadoop-common in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. |
| -1 :x: | javadoc | 0m 42s | hadoop-hdfs in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. |
| +1 :green_heart: | javadoc | 1m 43s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| +1 :green_heart: | findbugs | 5m 42s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 9m 29s | hadoop-common in the patch passed. |
| -1 :x: | unit | 114m 22s | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 54s | The patch does not generate ASF License warnings. |
| | | | 294m 48s | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.hdfs.TestRollingUpgrade |
| | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-2121/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2121 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle markdownlint |
| uname | Linux dfcd4689f5a9 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 834372f4040 |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-2121/2/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-2121/2/artifact/out/bra
[GitHub] [hadoop] hadoop-yetus commented on pull request #1861: HADOOP-13230. Optionally retain directory markers
hadoop-yetus commented on pull request #1861: URL: https://github.com/apache/hadoop/pull/1861#issuecomment-654458573 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 8s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 15 new or modified test files. | ||| _ trunk Compile Tests _ | | +0 :ok: | mvndep | 0m 22s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 21m 30s | trunk passed | | +1 :green_heart: | compile | 20m 31s | trunk passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 | | +1 :green_heart: | compile | 17m 15s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | checkstyle | 2m 53s | trunk passed | | +1 :green_heart: | mvnsite | 2m 6s | trunk passed | | +1 :green_heart: | shadedclient | 20m 46s | branch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 36s | hadoop-common in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | -1 :x: | javadoc | 0m 35s | hadoop-aws in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | +1 :green_heart: | javadoc | 1m 27s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +0 :ok: | spotbugs | 1m 7s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 3m 11s | trunk passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 22s | Maven dependency ordering for patch | | -1 :x: | mvninstall | 0m 26s | hadoop-aws in the patch failed. | | -1 :x: | compile | 18m 45s | root in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. 
| | -1 :x: | javac | 18m 45s | root in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | -1 :x: | compile | 16m 22s | root in the patch failed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09. | | -1 :x: | javac | 16m 22s | root in the patch failed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09. | | -0 :warning: | checkstyle | 2m 51s | root: The patch generated 60 new + 64 unchanged - 1 fixed = 124 total (was 65) | | -1 :x: | mvnsite | 0m 41s | hadoop-aws in the patch failed. | | -1 :x: | whitespace | 0m 0s | The patch has 4 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply | | +1 :green_heart: | xml | 0m 1s | The patch has no ill-formed XML file. | | +1 :green_heart: | shadedclient | 15m 34s | patch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 35s | hadoop-common in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | -1 :x: | javadoc | 0m 35s | hadoop-aws in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | +1 :green_heart: | javadoc | 1m 25s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | -1 :x: | findbugs | 0m 39s | hadoop-aws in the patch failed. | ||| _ Other Tests _ | | -1 :x: | unit | 9m 20s | hadoop-common in the patch passed. | | -1 :x: | unit | 0m 40s | hadoop-aws in the patch failed. | | +1 :green_heart: | asflicense | 0m 45s | The patch does not generate ASF License warnings. 
| | | | 165m 28s | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.security.TestFixKerberosTicketOrder | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1861/18/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1861 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux 8ccf523e390a 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 834372f4040 | | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1861/18/artifact/out/branch-java
[jira] [Commented] (HADOOP-17112) whitespace not allowed in paths when saving files to s3a via committer
[ https://issues.apache.org/jira/browse/HADOOP-17112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17152243#comment-17152243 ] Krzysztof Adamski commented on HADOOP-17112: Thank you. Let's see if we can handle it and will get back. > whitespace not allowed in paths when saving files to s3a via committer > -- > > Key: HADOOP-17112 > URL: https://issues.apache.org/jira/browse/HADOOP-17112 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.2.0 >Reporter: Krzysztof Adamski >Priority: Major > Attachments: image-2020-07-03-16-08-52-340.png > > > When saving results through spark dataframe on latest 3.0.1-snapshot compiled > against hadoop-3.2 with the following specs > --conf > spark.hadoop.mapreduce.outputcommitter.factory.scheme.s3a=org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory > > --conf > spark.sql.parquet.output.committer.class=org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter > > --conf > spark.sql.sources.commitProtocolClass=org.apache.spark.internal.io.cloud.PathOutputCommitProtocol > > --conf spark.hadoop.fs.s3a.committer.name=partitioned > --conf spark.hadoop.fs.s3a.committer.staging.conflict-mode=replace > we are unable to save the file with whitespace character in the path. It > works fine without. > I was looking into the recent commits with regards to qualifying the path, > but couldn't find anything obvious. Is this a known bug? 
> When saving results through spark dataframe on latest 3.0.1-snapshot compiled > against hadoop-3.2 with the following specs > --conf > spark.hadoop.mapreduce.outputcommitter.factory.scheme.s3a=org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory > > --conf > spark.sql.parquet.output.committer.class=org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter > > --conf > spark.sql.sources.commitProtocolClass=org.apache.spark.internal.io.cloud.PathOutputCommitProtocol > > --conf spark.hadoop.fs.s3a.committer.name=partitioned > --conf spark.hadoop.fs.s3a.committer.staging.conflict-mode=replace > we are unable to save the file with whitespace character in the path. It > works fine without. > I was looking into the recent commits with regards to qualifying the path, > but couldn't find anything obvious. Is this a known bug? > !image-2020-07-03-16-08-52-340.png! -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #2069: HADOOP-16830. IOStatistics API.
hadoop-yetus commented on pull request #2069: URL: https://github.com/apache/hadoop/pull/2069#issuecomment-654374675 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 9s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 2s | No case conflicting files found. | | +0 :ok: | markdownlint | 0m 0s | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 31 new or modified test files. | ||| _ trunk Compile Tests _ | | +0 :ok: | mvndep | 1m 3s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 21m 35s | trunk passed | | +1 :green_heart: | compile | 20m 32s | trunk passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 | | +1 :green_heart: | compile | 17m 20s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | checkstyle | 2m 57s | trunk passed | | +1 :green_heart: | mvnsite | 2m 5s | trunk passed | | +1 :green_heart: | shadedclient | 20m 48s | branch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 36s | hadoop-common in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | -1 :x: | javadoc | 0m 37s | hadoop-aws in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | +1 :green_heart: | javadoc | 1m 27s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +0 :ok: | spotbugs | 1m 7s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 3m 14s | trunk passed | | -0 :warning: | patch | 1m 26s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. 
| ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 21s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 24s | the patch passed | | +1 :green_heart: | compile | 19m 49s | the patch passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 | | -1 :x: | javac | 19m 49s | root-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 generated 1 new + 1965 unchanged - 1 fixed = 1966 total (was 1966) | | +1 :green_heart: | compile | 17m 20s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | -1 :x: | javac | 17m 20s | root-jdkPrivateBuild-1.8.0_252-8u252-b09-1~18.04-b09 with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 generated 1 new + 1859 unchanged - 1 fixed = 1860 total (was 1860) | | -0 :warning: | checkstyle | 2m 53s | root: The patch generated 15 new + 202 unchanged - 23 fixed = 217 total (was 225) | | +1 :green_heart: | mvnsite | 2m 8s | the patch passed | | -1 :x: | whitespace | 0m 0s | The patch has 12 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply | | +1 :green_heart: | xml | 0m 1s | The patch has no ill-formed XML file. | | +1 :green_heart: | shadedclient | 15m 23s | patch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 36s | hadoop-common in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | -1 :x: | javadoc | 0m 36s | hadoop-aws in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | +1 :green_heart: | javadoc | 0m 55s | hadoop-common in the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09. 
| | +1 :green_heart: | javadoc | 0m 33s | hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_252-8u252-b09-1~18.04-b09 with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 generated 0 new + 0 unchanged - 4 fixed = 0 total (was 4) | | +1 :green_heart: | findbugs | 3m 34s | the patch passed | ||| _ Other Tests _ | | -1 :x: | unit | 9m 28s | hadoop-common in the patch passed. | | +1 :green_heart: | unit | 1m 34s | hadoop-aws in the patch passed. | | +1 :green_heart: | asflicense | 0m 46s | The patch does not generate ASF License warnings. | | | | 170m 47s | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.fs.TestLocalFileSystem | | | hadoop.fs.statistics.TestDynamicIOStatistics | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-2069/11/artifact/out/Dockerfile | | GITHUB PR | https://github.com/ap
[GitHub] [hadoop] jimmy-zuber-amzn commented on a change in pull request #2113: HADOOP-17105: S3AFS - Do not attempt to resolve symlinks in globStatus
jimmy-zuber-amzn commented on a change in pull request #2113: URL: https://github.com/apache/hadoop/pull/2113#discussion_r450360857 ## File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AFileOperationCost.java ## @@ -574,4 +574,48 @@ public void testCreateCost() throws Throwable { } } + + @Test + public void testCostOfGlobStatus() throws Throwable { +describe("Test globStatus has expected cost"); +S3AFileSystem fs = getFileSystem(); +assume("Unguarded FS only", !fs.hasMetadataStore()); + +Path basePath = path("testCostOfGlobStatus/nextFolder/"); + +// create a bunch of files +int filesToCreate = 10; +for (int i = 0; i < filesToCreate; i++) { + try (FSDataOutputStream out = fs.create(basePath.suffix("/" + i))) { +verifyOperationCount(1, 1); + } +} + +fs.globStatus(basePath.suffix("/*")); +// 2 head + 1 list from getFileStatus on path, +// plus 1 list to match the glob pattern +verifyOperationCount(2, 2); + } + + @Test + public void testCostOfGlobStatusNoSymlinkResolution() throws Throwable { Review comment: So these tests two different things, a directory with multiple objects and a directory with one object. The directory with a single object is the special case that triggers attempted symlink resolution, so I wanted to carve out that special case in its own test. The multiple-objects-in-a-directory test is a general test that it felt like globStatus should have, whereas the second one was specifically made to catch the regression. If you don't think that the multiple objects test is justified, I can remove it. Thoughts? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] Hexiaoqiao commented on a change in pull request #2037: HDFS-14984. HDFS setQuota: Error message should be added for invalid …
Hexiaoqiao commented on a change in pull request #2037: URL: https://github.com/apache/hadoop/pull/2037#discussion_r450339697 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java ## @@ -201,8 +201,19 @@ public void run(Path path) throws IOException { super(conf); CommandFormat c = new CommandFormat(2, Integer.MAX_VALUE); List parameters = c.parse(args, pos); - this.quota = - StringUtils.TraditionalBinaryPrefix.string2long(parameters.remove(0)); + String str = parameters.get(0).trim(); + try { +this.quota = StringUtils.TraditionalBinaryPrefix +.string2long(parameters.remove(0)); + } catch(NumberFormatException e){ +throw new IllegalArgumentException("\"" + str + +"\" is not a valid value for a quota."); + } + if (HdfsConstants.QUOTA_DONT_SET == this.quota) { +System.out.print("WARN: \"" + this.quota + Review comment: If so, I think it is OK to throw an exception, so we avoid sending an RPC request to the NameNode when the client has already determined that the parameter is invalid.
[GitHub] [hadoop] Hexiaoqiao commented on a change in pull request #2037: HDFS-14984. HDFS setQuota: Error message should be added for invalid …
Hexiaoqiao commented on a change in pull request #2037: URL: https://github.com/apache/hadoop/pull/2037#discussion_r450336435 ## File path: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java ## @@ -2553,16 +2553,17 @@ void setQuota(String src, long namespaceQuota, long storagespaceQuota) throws IOException { checkOpen(); // sanity check -if ((namespaceQuota <= 0 && - namespaceQuota != HdfsConstants.QUOTA_DONT_SET && - namespaceQuota != HdfsConstants.QUOTA_RESET) || -(storagespaceQuota < 0 && +if (namespaceQuota <= 0 && +namespaceQuota != HdfsConstants.QUOTA_DONT_SET && +namespaceQuota != HdfsConstants.QUOTA_RESET){ + throw new IllegalArgumentException("Invalid values for " + + "namespace quota : " + namespaceQuota); +} +if (storagespaceQuota < 0 && Review comment: Thanks @zhaoyim for your explanation. But I am still confused about why this condition differs from the `namespaceQuota` one. Sorry, I could not find a user manual covering set*Quota.
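The sanity check being reviewed reduces to: a quota value is valid only if it is positive or one of two sentinel constants. Below is a minimal stdlib sketch of that logic; the sentinel values mirror the role of `HdfsConstants.QUOTA_DONT_SET` / `QUOTA_RESET` but are assumed local constants here, not the real Hadoop classes:

```java
// Minimal model of the client-side quota sanity check under discussion.
// QUOTA_DONT_SET and QUOTA_RESET are assumed sentinel values standing in
// for HdfsConstants; they are not imported from Hadoop.
class QuotaCheck {
    static final long QUOTA_DONT_SET = Long.MAX_VALUE; // "leave quota unchanged"
    static final long QUOTA_RESET = -1L;               // "remove the quota"

    static void validateNamespaceQuota(long q) {
        // Reject non-positive values unless they are one of the sentinels.
        if (q <= 0 && q != QUOTA_DONT_SET && q != QUOTA_RESET) {
            throw new IllegalArgumentException(
                "Invalid values for namespace quota : " + q);
        }
    }

    public static void main(String[] args) {
        validateNamespaceQuota(1024);        // fine: positive
        validateNamespaceQuota(QUOTA_RESET); // fine: sentinel
        try {
            validateNamespaceQuota(0);       // rejected client-side
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Failing fast like this is the point Hexiaoqiao raises above: an invalid parameter is rejected on the client before any RPC is sent to the NameNode.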
[GitHub] [hadoop] hadoop-yetus commented on pull request #2038: HADOOP-17022 Tune S3AFileSystem.listFiles() api.
hadoop-yetus commented on pull request #2038: URL: https://github.com/apache/hadoop/pull/2038#issuecomment-654320457 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 33s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 3 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 19m 23s | trunk passed | | +1 :green_heart: | compile | 0m 43s | trunk passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 | | +1 :green_heart: | compile | 0m 37s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | checkstyle | 0m 28s | trunk passed | | +1 :green_heart: | mvnsite | 0m 40s | trunk passed | | +1 :green_heart: | shadedclient | 15m 4s | branch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 32s | hadoop-aws in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | +1 :green_heart: | javadoc | 0m 30s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +0 :ok: | spotbugs | 1m 1s | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +1 :green_heart: | findbugs | 0m 59s | trunk passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 33s | the patch passed | | +1 :green_heart: | compile | 0m 33s | the patch passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 | | +1 :green_heart: | javac | 0m 33s | the patch passed | | +1 :green_heart: | compile | 0m 29s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | javac | 0m 29s | the patch passed | | +1 :green_heart: | checkstyle | 0m 19s | hadoop-tools/hadoop-aws: The patch generated 0 new + 15 unchanged - 1 fixed = 15 total (was 16) | | +1 :green_heart: | mvnsite | 0m 31s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 13m 32s | patch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 28s | hadoop-aws in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | +1 :green_heart: | javadoc | 0m 25s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | findbugs | 1m 1s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 21s | hadoop-aws in the patch passed. | | +1 :green_heart: | asflicense | 0m 31s | The patch does not generate ASF License warnings. 
| | | | 61m 20s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-2038/5/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2038 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux f855abc44374 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 639acb6d892 | | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-2038/5/artifact/out/branch-javadoc-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-2038/5/artifact/out/patch-javadoc-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-2038/5/testReport/ | | Max. process+thread count | 452 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-2038/5/console | | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. This is an automated messa
[jira] [Commented] (HADOOP-17081) MetricsSystem doesn't start the sink adapters on restart
[ https://issues.apache.org/jira/browse/HADOOP-17081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17152107#comment-17152107 ] Hudson commented on HADOOP-17081: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18411 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/18411/]) HADOOP-17081. MetricsSystem doesn't start the sink adapters on restart (github: rev 2f500e4635ea4347a55693b1a10a4a4465fe5fac) * (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/metrics2/impl/TestMetricsSystemImpl.java * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java > MetricsSystem doesn't start the sink adapters on restart > > > Key: HADOOP-17081 > URL: https://issues.apache.org/jira/browse/HADOOP-17081 > Project: Hadoop Common > Issue Type: Bug > Components: metrics > Environment: NA >Reporter: Madhusoodan >Assignee: Madhusoodan >Priority: Minor > Fix For: 3.2.2, 3.3.1 > > > In HBase we use dynamic metrics and when a metric is removed, we have to > refresh the JMX beans, since there is no API from Java to do it, a hack like > stopping the metrics system and restarting it was used (Read the comment on > the class > [https://github.com/mmpataki/hbase/blob/master/hbase-hadoop-compat/src/main/java/org/apache/hadoop/metrics2/impl/JmxCacheBuster.java]) > > It calls the below APIs in the same order > MetricsSystem.stop > MetricsSystem.start > > MetricsSystem.stop stops all the SinkAdapters, *but doesn't remove them from > the sink list* (allSinks is the variable). When the metrics system is started > again, *it is assumed that the SinkAdapters are restarted, but they are not* > due to the check done in the beginning of the function register. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
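The stop/start defect described in HADOOP-17081 can be modeled in a few lines of plain Java. This is a deliberately simplified sketch of the reported behavior, not the actual `MetricsSystemImpl` code; all class and method names are illustrative:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified model of the HADOOP-17081 bug: stop() halts the sink
// adapters but leaves them in the sink list, so a later start() sees
// them as already registered and never restarts them.
class MetricsSystemModel {
    static class SinkAdapter {
        boolean running;
        void start() { running = true; }
        void stop()  { running = false; }
    }

    final List<SinkAdapter> allSinks = new ArrayList<>();

    void register(String name) {
        // Bug trigger: registration is skipped when a sink is already
        // present, even if it was stopped and never restarted.
        if (allSinks.isEmpty()) {  // stands in for "not yet registered"
            SinkAdapter sa = new SinkAdapter();
            sa.start();
            allSinks.add(sa);
        }
    }

    void stop()  { allSinks.forEach(SinkAdapter::stop); } // sinks stay listed
    void start() { register("jmx"); }                     // no-op: already listed

    public static void main(String[] args) {
        MetricsSystemModel ms = new MetricsSystemModel();
        ms.register("jmx");
        ms.stop();
        ms.start();
        // After the restart the adapter is still stopped: the bug.
        System.out.println(ms.allSinks.get(0).running);
    }
}
```

The fix described in the commit makes the restart path actually restart the retained adapters instead of assuming they are running.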
[jira] [Commented] (HADOOP-17102) Add checkstyle rule to prevent further usage of Guava classes
[ https://issues.apache.org/jira/browse/HADOOP-17102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17152106#comment-17152106 ] Ahmed Hussein commented on HADOOP-17102: Thanks [~ayushtkn] ! That's a good point. I think this checkstyle is necessary because we do not want future commits to re-introduce the guava classes that have been replaced. > Add checkstyle rule to prevent further usage of Guava classes > - > > Key: HADOOP-17102 > URL: https://issues.apache.org/jira/browse/HADOOP-17102 > Project: Hadoop Common > Issue Type: Sub-task > Components: build, precommit >Reporter: Ahmed Hussein >Assignee: Ahmed Hussein >Priority: Major > Attachments: HADOOP-17102.001.patch, HADOOP-17102.002.patch > > > We should have precommit rules to prevent further usage of Guava classes that > are available in Java8+ > A list replacing Guava APIs with java8 features: > {code:java} > com.google.common.io.BaseEncoding#base64()java.util.Base64 > com.google.common.io.BaseEncoding#base64Url() java.util.Base64 > com.google.common.base.Joiner.on() > java.lang.String#join() or > >java.util.stream.Collectors#joining() > com.google.common.base.Optional#of() java.util.Optional#of() > com.google.common.base.Optional#absent() > java.util.Optional#empty() > com.google.common.base.Optional#fromNullable() > java.util.Optional#ofNullable() > com.google.common.base.Optional > java.util.Optional > com.google.common.base.Predicate > java.util.function.Predicate > com.google.common.base.Function > java.util.function.Function > com.google.common.base.Supplier > java.util.function.Supplier > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
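The replacement table quoted above is directly executable with the JDK alone. A small self-contained example exercising the Java 8 equivalents (no Guava on the classpath):

```java
import java.util.Arrays;
import java.util.Base64;
import java.util.Optional;
import java.util.stream.Collectors;

// Java 8 stand-ins for the Guava APIs listed in HADOOP-17102.
class GuavaReplacements {
    public static void main(String[] args) {
        // com.google.common.io.BaseEncoding#base64() -> java.util.Base64
        String encoded = Base64.getEncoder().encodeToString("hadoop".getBytes());

        // com.google.common.base.Joiner.on(",") -> String.join / Collectors.joining
        String joined = String.join(",", "a", "b", "c");
        String streamed = Arrays.asList("a", "b", "c").stream()
                .collect(Collectors.joining(","));

        // com.google.common.base.Optional -> java.util.Optional
        Optional<String> present = Optional.of("x");          // was Optional.of()
        Optional<String> absent  = Optional.empty();          // was Optional.absent()
        Optional<String> maybe   = Optional.ofNullable(null); // was fromNullable()

        System.out.println(encoded);                 // aGFkb29w
        System.out.println(joined.equals(streamed)); // true
        System.out.println(present.isPresent() && !absent.isPresent()
                && !maybe.isPresent());              // true
    }
}
```

Because each mapping is one-to-one, a checkstyle ban on the `com.google.common` imports listed in the table can be enforced without any behavioral risk.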
[GitHub] [hadoop] Hexiaoqiao merged pull request #2119: HDFS-15451. Do not discard non-initial block report for provided storage
Hexiaoqiao merged pull request #2119: URL: https://github.com/apache/hadoop/pull/2119
[GitHub] [hadoop] steveloughran commented on a change in pull request #2097: HADOOP-17088. Failed to load Xinclude files with relative path in cas…
steveloughran commented on a change in pull request #2097: URL: https://github.com/apache/hadoop/pull/2097#discussion_r450305929 ## File path: hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java ## @@ -1062,6 +1062,38 @@ public void testRelativeIncludes() throws Exception { new File(new File(relConfig).getParent()).delete(); } + @Test + public void testRelativeIncludesWithLoadingViaUri() throws Exception { +tearDown(); +File configFile = new File("./tmp/test-config.xml"); +File configFile2 = new File("./tmp/test-config2.xml"); + +new File(configFile.getParent()).mkdirs(); +out = new BufferedWriter(new FileWriter(configFile2)); +startConfig(); +appendProperty("a", "b"); +endConfig(); + +out = new BufferedWriter(new FileWriter(configFile)); +startConfig(); +// Add the relative path instead of the absolute one. +startInclude(configFile2.getName()); +endInclude(); +appendProperty("c", "d"); +endConfig(); + +// verify that the includes file contains all properties +Path fileResource = new Path(configFile.toURI()); +conf.addResource(fileResource); +assertEquals(conf.get("a"), "b"); Review comment: exactly. Because JUnit reports arg 1 as the 'expected value' in the exceptions it raises; if the arguments are the wrong way round, the error message is misleading and the failure takes longer to debug.
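The argument-order point in the review can be shown with a toy re-implementation of the assertion. This is an illustrative sketch, not JUnit itself; it only mimics the `expected:<...> but was:<...>` message shape to show why swapping the arguments misleads:

```java
// Why argument order matters: JUnit's assertEquals(expected, actual)
// reports the FIRST argument as the expected value. This minimal
// stand-in (not the real JUnit class) reproduces that message shape.
class AssertOrderDemo {
    static void assertEquals(Object expected, Object actual) {
        if (!expected.equals(actual)) {
            throw new AssertionError(
                "expected:<" + expected + "> but was:<" + actual + ">");
        }
    }

    public static void main(String[] args) {
        try {
            // Arguments swapped: the message claims "d" was expected,
            // even though "d" was really the actual value under test.
            assertEquals("d", "b");
        } catch (AssertionError e) {
            System.out.println(e.getMessage());
        }
    }
}
```

With `assertEquals(conf.get("a"), "b")`, a failure would report the looked-up value as "expected", which is exactly the misleading output the reviewer is flagging; `assertEquals("b", conf.get("a"))` reports it correctly.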
[GitHub] [hadoop] steveloughran commented on pull request #2089: HADOOP-17081. MetricsSystem doesn't start the sink adapters on restart
steveloughran commented on pull request #2089: URL: https://github.com/apache/hadoop/pull/2089#issuecomment-654309910 and 3.2. BTW, if you are looking at metrics, why not look at #2069? That's not about publishing externally, but it is designed to let apps collect detailed statistics about an instance of an IO object (stream, etc.) for collation.
[jira] [Resolved] (HADOOP-17081) MetricsSystem doesn't start the sink adapters on restart
[ https://issues.apache.org/jira/browse/HADOOP-17081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran resolved HADOOP-17081. - Fix Version/s: 3.3.1 3.2.2 Resolution: Fixed +1 -committed to branches 3.2, 3.3 and trunk. Thanks! > MetricsSystem doesn't start the sink adapters on restart > > > Key: HADOOP-17081 > URL: https://issues.apache.org/jira/browse/HADOOP-17081 > Project: Hadoop Common > Issue Type: Bug > Components: metrics > Environment: NA >Reporter: Madhusoodan >Assignee: Madhusoodan >Priority: Minor > Fix For: 3.2.2, 3.3.1 > > > In HBase we use dynamic metrics and when a metric is removed, we have to > refresh the JMX beans, since there is no API from Java to do it, a hack like > stopping the metrics system and restarting it was used (Read the comment on > the class > [https://github.com/mmpataki/hbase/blob/master/hbase-hadoop-compat/src/main/java/org/apache/hadoop/metrics2/impl/JmxCacheBuster.java]) > > It calls the below APIs in the same order > MetricsSystem.stop > MetricsSystem.start > > MetricsSystem.stop stops all the SinkAdapters, *but doesn't remove them from > the sink list* (allSinks is the variable). When the metrics system is started > again, *it is assumed that the SinkAdapters are restarted, but they are not* > due to the check done in the beginning of the function register. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Assigned] (HADOOP-17081) MetricsSystem doesn't start the sink adapters on restart
[ https://issues.apache.org/jira/browse/HADOOP-17081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran reassigned HADOOP-17081: --- Assignee: Madhusoodan > MetricsSystem doesn't start the sink adapters on restart > >
[GitHub] [hadoop] steveloughran commented on pull request #2089: HADOOP-17081. MetricsSystem doesn't start the sink adapters on restart
steveloughran commented on pull request #2089: URL: https://github.com/apache/hadoop/pull/2089#issuecomment-654305900 +1 merged to trunk and about to pull into branch-3.3
[GitHub] [hadoop] steveloughran merged pull request #2089: HADOOP-17081. MetricsSystem doesn't start the sink adapters on restart
steveloughran merged pull request #2089: URL: https://github.com/apache/hadoop/pull/2089
[GitHub] [hadoop] steveloughran commented on pull request #2038: HADOOP-17022 Tune S3AFileSystem.listFiles() api.
steveloughran commented on pull request #2038: URL: https://github.com/apache/hadoop/pull/2038#issuecomment-654304899 good to hear you are happy. What do others say?
[GitHub] [hadoop] steveloughran commented on a change in pull request #2038: HADOOP-17022 Tune S3AFileSystem.listFiles() api.
steveloughran commented on a change in pull request #2038: URL: https://github.com/apache/hadoop/pull/2038#discussion_r450298495 ## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java ## @@ -4181,79 +4181,114 @@ public LocatedFileStatus next() throws IOException { Path path = qualify(f); LOG.debug("listFiles({}, {})", path, recursive); try { - // if a status was given, that is used, otherwise - // call getFileStatus, which triggers an existence check - final S3AFileStatus fileStatus = status != null - ? status - : (S3AFileStatus) getFileStatus(path); - if (fileStatus.isFile()) { + // if a status was given and it is a file. + if (status != null && status.isFile()) { // simple case: File LOG.debug("Path is a file"); return new Listing.SingleStatusRemoteIterator( -toLocatedFileStatus(fileStatus)); - } else { -// directory: do a bulk operation -String key = maybeAddTrailingSlash(pathToKey(path)); -String delimiter = recursive ? null : "/"; -LOG.debug("Requesting all entries under {} with delimiter '{}'", -key, delimiter); -final RemoteIterator cachedFilesIterator; -final Set tombstones; -boolean allowAuthoritative = allowAuthoritative(f); -if (recursive) { - final PathMetadata pm = metadataStore.get(path, true); - // shouldn't need to check pm.isDeleted() because that will have - // been caught by getFileStatus above. 
- MetadataStoreListFilesIterator metadataStoreListFilesIterator = - new MetadataStoreListFilesIterator(metadataStore, pm, - allowAuthoritative); - tombstones = metadataStoreListFilesIterator.listTombstones(); - // if all of the below is true - // - authoritative access is allowed for this metadatastore for this directory, - // - all the directory listings are authoritative on the client - // - the caller does not force non-authoritative access - // return the listing without any further s3 access - if (!forceNonAuthoritativeMS && - allowAuthoritative && - metadataStoreListFilesIterator.isRecursivelyAuthoritative()) { -S3AFileStatus[] statuses = S3Guard.iteratorToStatuses( -metadataStoreListFilesIterator, tombstones); -cachedFilesIterator = listing.createProvidedFileStatusIterator( -statuses, ACCEPT_ALL, acceptor); -return listing.createLocatedFileStatusIterator(cachedFilesIterator); - } - cachedFilesIterator = metadataStoreListFilesIterator; -} else { - DirListingMetadata meta = - S3Guard.listChildrenWithTtl(metadataStore, path, ttlTimeProvider, - allowAuthoritative); - if (meta != null) { -tombstones = meta.listTombstones(); - } else { -tombstones = null; - } - cachedFilesIterator = listing.createProvidedFileStatusIterator( - S3Guard.dirMetaToStatuses(meta), ACCEPT_ALL, acceptor); - if (allowAuthoritative && meta != null && meta.isAuthoritative()) { -// metadata listing is authoritative, so return it directly -return listing.createLocatedFileStatusIterator(cachedFilesIterator); - } +toLocatedFileStatus(status)); + } + // Assuming the path to be a directory + // do a bulk operation. + RemoteIterator listFilesAssumingDir = + getListFilesAssumingDir(path, + recursive, + acceptor, + collectTombstones, + forceNonAuthoritativeMS); + // If there are no list entries present, we + // fallback to file existence check as the path + // can be a file or empty directory. 
+ if (!listFilesAssumingDir.hasNext()) { +final S3AFileStatus fileStatus = (S3AFileStatus) getFileStatus(path); Review comment: yea, but an empty dir is that HEAD marker. So no list, right? Don't worry about it - my dir marker tuning changes things anyway, replacing the HEAD + / with a list.
[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2038: HADOOP-17022 Tune S3AFileSystem.listFiles() api.
hadoop-yetus removed a comment on pull request #2038: URL: https://github.com/apache/hadoop/pull/2038#issuecomment-650243526 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 34s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 3 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 18m 46s | trunk passed | | +1 :green_heart: | compile | 0m 42s | trunk passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 | | +1 :green_heart: | compile | 0m 35s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | checkstyle | 0m 28s | trunk passed | | +1 :green_heart: | mvnsite | 0m 41s | trunk passed | | +1 :green_heart: | shadedclient | 15m 3s | branch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 30s | hadoop-aws in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | +1 :green_heart: | javadoc | 0m 30s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +0 :ok: | spotbugs | 1m 0s | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +1 :green_heart: | findbugs | 0m 58s | trunk passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 34s | the patch passed | | +1 :green_heart: | compile | 0m 32s | the patch passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 | | +1 :green_heart: | javac | 0m 32s | the patch passed | | +1 :green_heart: | compile | 0m 27s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | javac | 0m 27s | the patch passed | | -0 :warning: | checkstyle | 0m 19s | hadoop-tools/hadoop-aws: The patch generated 1 new + 15 unchanged - 1 fixed = 16 total (was 16) | | +1 :green_heart: | mvnsite | 0m 32s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 13m 36s | patch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 27s | hadoop-aws in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | +1 :green_heart: | javadoc | 0m 26s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | findbugs | 1m 1s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 19s | hadoop-aws in the patch passed. | | +1 :green_heart: | asflicense | 0m 31s | The patch does not generate ASF License warnings. 
| | | | 60m 33s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-2038/4/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2038 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 7c2dcbc53c67 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / e0c1d8a9690 | | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-2038/4/artifact/out/branch-javadoc-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-2038/4/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-2038/4/artifact/out/patch-javadoc-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-2038/4/testReport/ | | Max. process+thread count | 454 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-2038/4/console | | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
[jira] [Updated] (HADOOP-17112) whitespace not allowed in paths when saving files to s3a via committer
[ https://issues.apache.org/jira/browse/HADOOP-17112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-17112: Component/s: fs/s3 > whitespace not allowed in paths when saving files to s3a via committer > -- > > Key: HADOOP-17112 > URL: https://issues.apache.org/jira/browse/HADOOP-17112 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.2.0 >Reporter: Krzysztof Adamski >Priority: Major > Attachments: image-2020-07-03-16-08-52-340.png > > > When saving results through spark dataframe on latest 3.0.1-snapshot compiled > against hadoop-3.2 with the following specs > --conf > spark.hadoop.mapreduce.outputcommitter.factory.scheme.s3a=org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory > > --conf > spark.sql.parquet.output.committer.class=org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter > > --conf > spark.sql.sources.commitProtocolClass=org.apache.spark.internal.io.cloud.PathOutputCommitProtocol > > --conf spark.hadoop.fs.s3a.committer.name=partitioned > --conf spark.hadoop.fs.s3a.committer.staging.conflict-mode=replace > we are unable to save the file with whitespace character in the path. It > works fine without. > I was looking into the recent commits with regards to qualifying the path, > but couldn't find anything obvious. Is this a known bug? 
[jira] [Commented] (HADOOP-17112) whitespace not allowed in paths when saving files to s3a via committer
[ https://issues.apache.org/jira/browse/HADOOP-17112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17152089#comment-17152089 ] Steve Loughran commented on HADOOP-17112: - Looks like a marshalling bug in the creation of the SinglePendingCommit file in CommitOperations.uploadFileToPendingCommit(): path.toString() is used to create the string to save, when it should be toUri().toString(). There is no way I'm going to go near this code in the next week, and even if I did I would be left trying to chase down a reviewer. Do you fancy having a go at it? A new test should go into ITestCommitOperations, and the hadoop-aws patch policy ("tell us the AWS region where you ran the module's 'mvn verify' suite") will apply, I'm afraid. > whitespace not allowed in paths when saving files to s3a via committer > --
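The toString()-versus-toUri() distinction matters because a raw path string containing a space is not a parseable URI, while the URI form percent-encodes it. A small standalone java.net.URI sketch (the bucket and path names here are made up, and this is not the actual Hadoop Path code):

```java
import java.net.URI;
import java.net.URISyntaxException;

public class UriSpaceDemo {
    public static void main(String[] args) throws URISyntaxException {
        // The multi-argument URI constructor percent-encodes illegal
        // characters such as spaces in the path component.
        URI encoded = new URI("s3a", "bucket", "/output path/csv", null);
        System.out.println(encoded);  // s3a://bucket/output%20path/csv

        // Parsing the raw string form -- what an unescaped toString()
        // would have produced -- fails, much like the reported
        // "Cannot parse URI" error.
        try {
            new URI("s3a://bucket/output path/csv");
        } catch (URISyntaxException e) {
            System.out.println("cannot parse raw string with space");
        }
    }
}
```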
[GitHub] [hadoop] amahussein commented on pull request #2122: [HADOOP-17109] Replace Guava base64Url and base64 with Java8+ base64
amahussein commented on pull request #2122: URL: https://github.com/apache/hadoop/pull/2122#issuecomment-654298701 Thanks @dengliming for the patch. - [ ] Can you please list the failing test units? Just to confirm that we went through all of them, enumerating the ones that are flaky and the ones that could be introduced by our changes. - [ ] Can you please fix the checkstyle warnings? - [ ] I see that there are many base64 implementations used throughout the code. I think in that case we should change the Jira title and description to reflect the fact that we are also replacing `apache common base`. - [ ] Just to make sure that we have the exact same behavior as before, do you know what the differences between each of them are? Also, any idea about performance between guava, apache common, and java.util? - `com.google.common.io.BaseEncoding#base64` - `com.google.common.io.BaseEncoding#base64Url` - `org.apache.commons.codec.binary.Base64` - [ ] Can you please add the two classes to the illegal imports in checkstyle.xml? It is already done in HADOOP-17111. It should be something like this: ``` diff --git hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml index 8f3d3f13824..54a59437380 100644 --- hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml +++ hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml @@ -119,7 +119,12 @@ - + + + + + ```
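For reference, a sketch of the java.util.Base64 calls that roughly correspond to the Guava encodings under discussion. This is an illustration of the migration target, not the PR itself; the exact padding and alphabet behaviour of each library should be verified against its documentation before swapping call sites.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Base64Migration {
    public static void main(String[] args) {
        byte[] data = "any + value / here".getBytes(StandardCharsets.UTF_8);

        // rough java.util equivalent of Guava's BaseEncoding.base64()
        String std = Base64.getEncoder().encodeToString(data);

        // rough java.util equivalent of Guava's BaseEncoding.base64Url():
        // '-' and '_' replace '+' and '/' in the output alphabet
        String url = Base64.getUrlEncoder().encodeToString(data);

        System.out.println(std);
        System.out.println(url);

        // round-trip with the matching decoder
        byte[] back = Base64.getDecoder().decode(std);
        System.out.println(new String(back, StandardCharsets.UTF_8));
    }
}
```

Both java.util encoders pad with '=' by default; `withoutPadding()` is available if an existing call site relied on unpadded output.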
[jira] [Commented] (HADOOP-17111) Replace Guava Optional with Java8+ Optional
[ https://issues.apache.org/jira/browse/HADOOP-17111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17152045#comment-17152045 ] Ahmed Hussein commented on HADOOP-17111: Thank you Akira! One is down :) > Replace Guava Optional with Java8+ Optional > --- > > Key: HADOOP-17111 > URL: https://issues.apache.org/jira/browse/HADOOP-17111 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Ahmed Hussein >Assignee: Ahmed Hussein >Priority: Major > Fix For: 3.3.1, 3.4.0 > > Attachments: HADOOP-17111.001.patch, HADOOP-17111.002.patch > > > {code:java} > Targets > Occurrences of 'com.google.common.base.Optional' in project with mask > '*.java' > Found Occurrences (3 usages found) > org.apache.hadoop.yarn.server.nodemanager (2 usages found) > DefaultContainerExecutor.java (1 usage found) > 71 import com.google.common.base.Optional; > LinuxContainerExecutor.java (1 usage found) > 22 import com.google.common.base.Optional; > org.apache.hadoop.yarn.server.resourcemanager.recovery (1 usage found) > TestZKRMStateStorePerf.java (1 usage found) > 21 import com.google.common.base.Optional; > {code}
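The common call-site substitutions for this migration can be sketched as follows. The Guava names appear only in comments (so the sketch compiles without Guava on the classpath), and the values used are placeholders:

```java
import java.util.Optional;

public class OptionalMigration {
    public static void main(String[] args) {
        // Guava: Optional.fromNullable(value)  ->  java.util: Optional.ofNullable(value)
        Optional<String> present = Optional.ofNullable("pmem");

        // Guava: Optional.absent()             ->  java.util: Optional.empty()
        Optional<String> absent = Optional.empty();

        // Guava: opt.or("default")             ->  java.util: opt.orElse("default")
        System.out.println(present.orElse("default")); // pmem
        System.out.println(absent.orElse("default"));  // default

        // Guava: opt.orNull()                  ->  java.util: opt.orElse(null)
        System.out.println(absent.orElse(null));       // null
    }
}
```

isPresent() and get() keep the same names in both APIs, so only the construction and fallback calls usually need touching.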
[GitHub] [hadoop] virajith commented on pull request #2119: HDFS-15451. Do not discard non-initial block report for provided storage
virajith commented on pull request #2119: URL: https://github.com/apache/hadoop/pull/2119#issuecomment-654250736 Changes look good. Thanks for working on this @shanyu.
[GitHub] [hadoop] hadoop-yetus commented on pull request #2123: ABFS: Making AzureADAuthenticator.getToken() throw HttpException if a…
hadoop-yetus commented on pull request #2123: URL: https://github.com/apache/hadoop/pull/2123#issuecomment-654233656 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 25m 34s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 22m 31s | trunk passed | | +1 :green_heart: | compile | 0m 34s | trunk passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 | | +1 :green_heart: | compile | 0m 28s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | checkstyle | 0m 20s | trunk passed | | +1 :green_heart: | mvnsite | 0m 32s | trunk passed | | +1 :green_heart: | shadedclient | 16m 24s | branch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 26s | hadoop-azure in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | +1 :green_heart: | javadoc | 0m 23s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +0 :ok: | spotbugs | 0m 51s | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +1 :green_heart: | findbugs | 0m 50s | trunk passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 28s | the patch passed | | +1 :green_heart: | compile | 0m 28s | the patch passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 | | +1 :green_heart: | javac | 0m 28s | the patch passed | | +1 :green_heart: | compile | 0m 23s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | javac | 0m 23s | the patch passed | | +1 :green_heart: | checkstyle | 0m 16s | the patch passed | | +1 :green_heart: | mvnsite | 0m 26s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 15m 38s | patch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 23s | hadoop-azure in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | +1 :green_heart: | javadoc | 0m 20s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | findbugs | 0m 56s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 15s | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 28s | The patch does not generate ASF License warnings. 
| | | | 91m 0s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2123 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux fc5b43c34b3f 4.15.0-91-generic #92-Ubuntu SMP Fri Feb 28 11:09:48 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 639acb6d892 | | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/1/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/1/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/1/testReport/ | | Max. process+thread count | 312 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/1/console | | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated.
[jira] [Commented] (HADOOP-17112) whitespace not allowed in paths when saving files to s3a via committer
[ https://issues.apache.org/jira/browse/HADOOP-17112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17152001#comment-17152001 ] Krzysztof Adamski commented on HADOOP-17112: Thanks. The code that produces the error {code:java} spark.read.csv('s3a://XXX/XXX/data/test_file1.txt').write.format('csv').save('s3a://XXX/XXX/output path/csv') {code} and stacktrace {code:java} Py4JJavaError Traceback (most recent call last) in > 1 spark.read.csv('s3a://XXX/XXX/data/test_file1.txt').write.format('csv').save('s3a://XXX/XXX/output path/csv') /usr/local/spark-3.0.1-SNAPSHOT-bin-wbaa-yarn/python/lib/pyspark.zip/pyspark/sql/readwriter.py in save(self, path, format, mode, partitionBy, **options) 825 self._jwrite.save() 826 else: --> 827 self._jwrite.save(path) 828 829 @since(1.4) /usr/local/second-app-dir/venvpy3/lib/python3.6/site-packages/py4j/java_gateway.py in call(self, *args) 1303 answer = self.gateway_client.send_command(command) 1304 return_value = get_return_value( -> 1305 answer, self.gateway_client, self.target_id, self.name) 1306 1307 for temp_arg in temp_args: /usr/local/spark-3.0.1-SNAPSHOT-bin-wbaa-yarn/python/lib/pyspark.zip/pyspark/sql/utils.py in deco(*a, **kw) 126 def deco(*a, **kw): 127 try: --> 128 return f(*a, **kw) 129 except py4j.protocol.Py4JJavaError as e: 130 converted = convert_exception(e.java_exception) /usr/local/second-app-dir/venvpy3/lib/python3.6/site-packages/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name) 326 raise Py4JJavaError( 327 "An error occurred while calling {0}{1}{2}.\n". --> 328 format(target_id, ".", name), value) 329 else: 330 raise Py4JError( Py4JJavaError: An error occurred while calling o86.save. : org.apache.spark.SparkException: Job aborted. 
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:226) at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:178) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:108) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:106) at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:131) at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:175) at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:213) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:210) at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:171) at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:122) at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:121) at org.apache.spark.sql.DataFrameWriter.$anonfun$runCommand$1(DataFrameWriter.scala:944) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:100) at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:160) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:87) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:763) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64) at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:944) at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:396) at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:380) at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:269) at 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357) at py4j.Gateway.invoke(Gateway.java:282) at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) at py4j.commands.CallCommand.execute(CallCommand.java:79) at py4j.GatewayConnection.run(GatewayConnection.java:238) at java.lang.Thread.run(Thread.java:748) Caused by: java.lang.IllegalStateException: Cannot parse URI s3a://XXX/XXX/output path/csv/part-1-182a6744-a467-4225-a09e-e2e305a66a4f-c000-application_1592135134673_20377.csv at org.apache.hadoop.fs.s3a.commit.files.SinglePendingCommit.destina
[jira] [Commented] (HADOOP-17112) whitespace not allowed in paths when saving files to s3a via committer
[ https://issues.apache.org/jira/browse/HADOOP-17112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17151991#comment-17151991 ] Steve Loughran commented on HADOOP-17112: - Not anything I've worried about... spaces in names are rare enough that we all just skipped testing it. Can you post a text stack trace so that I can paste it into the IDE? You can replace all the path's non-space, non-numeric chars with some value like X if that keeps things private, and I don't care about your bucket name either. > whitespace not allowed in paths when saving files to s3a via committer > --
> !image-2020-07-03-16-08-52-340.png! -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
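The "Cannot parse URI" failure reported here comes down to `java.net.URI` rejecting unencoded spaces, which is consistent with the behavior described in the ticket. A minimal sketch (class name, bucket, and file names below are made up for illustration, not taken from the Hadoop codebase):

```java
import java.net.URI;
import java.net.URISyntaxException;

// Illustrative sketch: java.net.URI throws URISyntaxException on a raw space,
// so "output path/csv" fails to parse, while the percent-encoded form is fine.
public class UriSpaceDemo {
    static boolean parses(String s) {
        try {
            new URI(s);
            return true;
        } catch (URISyntaxException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        String raw = "s3a://bucket/output path/part-0000.csv";
        System.out.println("raw parses: " + parses(raw));                        // false
        System.out.println("encoded parses: " + parses(raw.replace(" ", "%20"))); // true
    }
}
```

This is why the job works when the destination path has no whitespace: the unencoded string only trips the URI parser once a space is present.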
[jira] [Updated] (HADOOP-17112) whitespace not allowed in paths when saving files to s3a via committer
[ https://issues.apache.org/jira/browse/HADOOP-17112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-17112: Parent: HADOOP-16829 Issue Type: Sub-task (was: Bug) > whitespace not allowed in paths when saving files to s3a via committer > -- > > Key: HADOOP-17112 > URL: https://issues.apache.org/jira/browse/HADOOP-17112 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.2.0 >Reporter: Krzysztof Adamski >Priority: Major > Attachments: image-2020-07-03-16-08-52-340.png > > > When saving results through spark dataframe on latest 3.0.1-snapshot compiled > against hadoop-3.2 with the following specs > --conf > spark.hadoop.mapreduce.outputcommitter.factory.scheme.s3a=org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory > > --conf > spark.sql.parquet.output.committer.class=org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter > > --conf > spark.sql.sources.commitProtocolClass=org.apache.spark.internal.io.cloud.PathOutputCommitProtocol > > --conf spark.hadoop.fs.s3a.committer.name=partitioned > --conf spark.hadoop.fs.s3a.committer.staging.conflict-mode=replace > we are unable to save the file with whitespace character in the path. It > works fine without. > I was looking into the recent commits with regards to qualifying the path, > but couldn't find anything obvious. Is this a known bug? 
> !image-2020-07-03-16-08-52-340.png! -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-12572) Update Hadoop's lz4 to r131
[ https://issues.apache.org/jira/browse/HADOOP-12572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17151984#comment-17151984 ] Hadoop QA commented on HADOOP-12572: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 8s{color} | {color:red} HADOOP-12572 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HADOOP-12572 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12772360/HADOOP-12572.001.patch | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/17023/console | | versions | git=2.17.1 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. > Update Hadoop's lz4 to r131 > --- > > Key: HADOOP-12572 > URL: https://issues.apache.org/jira/browse/HADOOP-12572 > Project: Hadoop Common > Issue Type: Improvement > Components: native >Reporter: Kevin Bowling >Assignee: Kevin Bowling >Priority: Major > Attachments: HADOOP-12572.001.patch > > > Update hadoop's native lz4 copy to r131 versus the current r123 copy. > Release notes for the versions are at https://github.com/Cyan4973/lz4/releases > Noteworthy changes: > * 30% performance improvement for clang > * GCC 4.9+ bug fixes > * New 32/64 bits, little/big endian and strict/efficient align detection > routines (internal) > * Small decompression speed improvement > This is my first Hadoop patch, review/feedback appreciated. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-12572) Update Hadoop's lz4 to r131
[ https://issues.apache.org/jira/browse/HADOOP-12572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17151982#comment-17151982 ] lindongdong edited comment on HADOOP-12572 at 7/6/20, 12:23 PM: [~kev009] this is great work, why not go on? was (Author: lindongdong): [~ste...@apache.org] this is great work, why not go on? > Update Hadoop's lz4 to r131 > --- > > Key: HADOOP-12572 > URL: https://issues.apache.org/jira/browse/HADOOP-12572 > Project: Hadoop Common > Issue Type: Improvement > Components: native >Reporter: Kevin Bowling >Assignee: Kevin Bowling >Priority: Major > Attachments: HADOOP-12572.001.patch > > > Update hadoop's native lz4 copy to r131 versus the current r123 copy. > Release notes for the versions are at https://github.com/Cyan4973/lz4/releases > Noteworthy changes: > * 30% performance improvement for clang > * GCC 4.9+ bug fixes > * New 32/64 bits, little/big endian and strict/efficient align detection > routines (internal) > * Small decompression speed improvement > This is my first Hadoop patch, review/feedback appreciated. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-12572) Update Hadoop's lz4 to r131
[ https://issues.apache.org/jira/browse/HADOOP-12572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17151982#comment-17151982 ] lindongdong commented on HADOOP-12572: -- [~ste...@apache.org] this is great work, why not go on? > Update Hadoop's lz4 to r131 > --- > > Key: HADOOP-12572 > URL: https://issues.apache.org/jira/browse/HADOOP-12572 > Project: Hadoop Common > Issue Type: Improvement > Components: native >Reporter: Kevin Bowling >Assignee: Kevin Bowling >Priority: Major > Attachments: HADOOP-12572.001.patch > > > Update hadoop's native lz4 copy to r131 versus the current r123 copy. > Release notes for the versions are at https://github.com/Cyan4973/lz4/releases > Noteworthy changes: > * 30% performance improvement for clang > * GCC 4.9+ bug fixes > * New 32/64 bits, little/big endian and strict/efficient align detection > routines (internal) > * Small decompression speed improvement > This is my first Hadoop patch, review/feedback appreciated. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17092) ABFS: Long waits and unintended retries when multiple threads try to fetch token using ClientCreds
[ https://issues.apache.org/jira/browse/HADOOP-17092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bilahari T H updated HADOOP-17092: -- Status: Patch Available (was: In Progress) > ABFS: Long waits and unintended retries when multiple threads try to fetch > token using ClientCreds > -- > > Key: HADOOP-17092 > URL: https://issues.apache.org/jira/browse/HADOOP-17092 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.3.0 >Reporter: Sneha Vijayarajan >Assignee: Bilahari T H >Priority: Major > Fix For: 3.4.0 > > > Issue reported by DB: > we recently experienced some problems with ABFS driver that highlighted a > possible issue with long hangs following synchronized retries when using the > _ClientCredsTokenProvider_ and calling _AbfsClient.getAccessToken_. We have > seen > [https://github.com/apache/hadoop/pull/1923|https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgithub.com%2Fapache%2Fhadoop%2Fpull%2F1923&data=02%7c01%7csnvijaya%40microsoft.com%7c7362c5ba4af24a553c4308d807ec459d%7c72f988bf86f141af91ab2d7cd011db47%7c1%7c0%7c637268058650442694&sdata=FePBBkEqj5kI2Ty4kNr3a2oJgB8Kvy3NvyRK8NoxyH4%3D&reserved=0], > but it does not directly apply since we are not using a custom token > provider, but instead _ClientCredsTokenProvider_ that ultimately relies on > _AzureADAuthenticator_. > > The problem was that the critical section of getAccessToken, combined with a > possibly redundant retry policy, made jobs hanging for a very long time, > since only one thread at a time could make progress, and this progress > amounted to basically retrying on a failing connection for 30-60 minutes. > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
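The fix direction the ticket describes — keeping the slow, retrying network call from serializing every token request behind one lock — can be sketched with double-checked locking: waiting threads reuse a token another thread just fetched instead of each repeating the fetch. The class, method names, and timings below are illustrative, not the actual AzureADAuthenticator code:

```java
import java.time.Instant;

// Illustrative sketch: only one thread refreshes an expired token; everyone
// else either takes the lock-free fast path or, after briefly waiting on the
// lock, finds the token another thread just refreshed (double-checked locking).
public class TokenCache {
    private volatile String token;
    private volatile Instant expiry = Instant.MIN;
    private final Object refreshLock = new Object();

    public String getAccessToken() {
        if (Instant.now().isBefore(expiry)) {
            return token;                        // fast path: no lock taken
        }
        synchronized (refreshLock) {
            if (Instant.now().isBefore(expiry)) {
                return token;                    // another thread refreshed it
            }
            token = fetchFromAad();              // the only caller doing I/O
            expiry = Instant.now().plusSeconds(3000);
            return token;
        }
    }

    // Stand-in for the real AAD HTTP call (an assumption, not the actual API).
    String fetchFromAad() {
        return "token-" + System.nanoTime();
    }
}
```

The key point is that the expensive call happens at most once per expiry window, so a stalled fetch delays one thread's attempt rather than queueing every thread behind 30-60 minutes of serialized retries.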
[jira] [Work started] (HADOOP-17092) ABFS: Long waits and unintended retries when multiple threads try to fetch token using ClientCreds
[ https://issues.apache.org/jira/browse/HADOOP-17092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HADOOP-17092 started by Bilahari T H. - > ABFS: Long waits and unintended retries when multiple threads try to fetch > token using ClientCreds > -- > > Key: HADOOP-17092 > URL: https://issues.apache.org/jira/browse/HADOOP-17092 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.3.0 >Reporter: Sneha Vijayarajan >Assignee: Bilahari T H >Priority: Major > Fix For: 3.4.0 > > > Issue reported by DB: > we recently experienced some problems with ABFS driver that highlighted a > possible issue with long hangs following synchronized retries when using the > _ClientCredsTokenProvider_ and calling _AbfsClient.getAccessToken_. We have > seen > [https://github.com/apache/hadoop/pull/1923|https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgithub.com%2Fapache%2Fhadoop%2Fpull%2F1923&data=02%7c01%7csnvijaya%40microsoft.com%7c7362c5ba4af24a553c4308d807ec459d%7c72f988bf86f141af91ab2d7cd011db47%7c1%7c0%7c637268058650442694&sdata=FePBBkEqj5kI2Ty4kNr3a2oJgB8Kvy3NvyRK8NoxyH4%3D&reserved=0], > but it does not directly apply since we are not using a custom token > provider, but instead _ClientCredsTokenProvider_ that ultimately relies on > _AzureADAuthenticator_. > > The problem was that the critical section of getAccessToken, combined with a > possibly redundant retry policy, made jobs hanging for a very long time, > since only one thread at a time could make progress, and this progress > amounted to basically retrying on a failing connection for 30-60 minutes. > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] bilaharith opened a new pull request #2123: ABFS: Making AzureADAuthenticator.getToken() throw HttpException if a…
bilaharith opened a new pull request #2123: URL: https://github.com/apache/hadoop/pull/2123 ABFS: Making AzureADAuthenticator.getToken() throw HttpException if all the retries have failed. This is to indicate that no retry is required at AbfsRestOperation.executeHttpOperation(). Introduced a delay between retries. Test cases are not added as the change is in a private static method. *Driver test results using accounts in Central India* mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify *Account with HNS Support* [INFO] Tests run: 63, Failures: 0, Errors: 0, Skipped: 0 [WARNING] Tests run: 436, Failures: 0, Errors: 0, Skipped: 74 [WARNING] Tests run: 206, Failures: 0, Errors: 0, Skipped: 24 *Account without HNS support* [INFO] Tests run: 63, Failures: 0, Errors: 0, Skipped: 0 [WARNING] Tests run: 436, Failures: 0, Errors: 0, Skipped: 248 [WARNING] Tests run: 206, Failures: 0, Errors: 0, Skipped: 24 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
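The retry behavior this PR describes — a bounded number of attempts with a delay between them, rethrowing the final failure so the outer layer knows not to retry again — might look like the following sketch. All names are illustrative; this is not the actual AbfsRestOperation code:

```java
import java.io.IOException;

// Illustrative sketch: bounded retries with a delay between attempts,
// rethrowing the last failure once retries are exhausted so the caller
// can treat the error as terminal instead of retrying again on top.
public class RetryWithDelay {

    interface Call<T> {
        T run() throws IOException;
    }

    static <T> T retry(Call<T> call, int maxRetries, long delayMillis)
            throws IOException {
        IOException last = null;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                return call.run();
            } catch (IOException e) {
                last = e;
                if (attempt < maxRetries) {
                    try {
                        Thread.sleep(delayMillis);   // delay between retries
                    } catch (InterruptedException ie) {
                        Thread.currentThread().interrupt();
                        throw new IOException("interrupted during retry", ie);
                    }
                }
            }
        }
        throw last;  // exhausted: surface the failure, no outer retry needed
    }
}
```

Throwing after exhaustion is the point of the change: without it, the caller cannot distinguish "already retried and failed" from a first-time failure, and may stack its own retries on top.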
[jira] [Assigned] (HADOOP-17092) ABFS: Long waits and unintended retries when multiple threads try to fetch token using ClientCreds
[ https://issues.apache.org/jira/browse/HADOOP-17092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bilahari T H reassigned HADOOP-17092: - Assignee: Bilahari T H (was: Sneha Vijayarajan) > ABFS: Long waits and unintended retries when multiple threads try to fetch > token using ClientCreds > -- > > Key: HADOOP-17092 > URL: https://issues.apache.org/jira/browse/HADOOP-17092 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Reporter: Sneha Vijayarajan >Assignee: Bilahari T H >Priority: Major > Fix For: 3.4.0 > > > Issue reported by DB: > we recently experienced some problems with ABFS driver that highlighted a > possible issue with long hangs following synchronized retries when using the > _ClientCredsTokenProvider_ and calling _AbfsClient.getAccessToken_. We have > seen > [https://github.com/apache/hadoop/pull/1923|https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgithub.com%2Fapache%2Fhadoop%2Fpull%2F1923&data=02%7c01%7csnvijaya%40microsoft.com%7c7362c5ba4af24a553c4308d807ec459d%7c72f988bf86f141af91ab2d7cd011db47%7c1%7c0%7c637268058650442694&sdata=FePBBkEqj5kI2Ty4kNr3a2oJgB8Kvy3NvyRK8NoxyH4%3D&reserved=0], > but it does not directly apply since we are not using a custom token > provider, but instead _ClientCredsTokenProvider_ that ultimately relies on > _AzureADAuthenticator_. > > The problem was that the critical section of getAccessToken, combined with a > possibly redundant retry policy, made jobs hanging for a very long time, > since only one thread at a time could make progress, and this progress > amounted to basically retrying on a failing connection for 30-60 minutes. > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17092) ABFS: Long waits and unintended retries when multiple threads try to fetch token using ClientCreds
[ https://issues.apache.org/jira/browse/HADOOP-17092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bilahari T H updated HADOOP-17092: -- Affects Version/s: 3.3.0 > ABFS: Long waits and unintended retries when multiple threads try to fetch > token using ClientCreds > -- > > Key: HADOOP-17092 > URL: https://issues.apache.org/jira/browse/HADOOP-17092 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.3.0 >Reporter: Sneha Vijayarajan >Assignee: Bilahari T H >Priority: Major > Fix For: 3.4.0 > > > Issue reported by DB: > we recently experienced some problems with ABFS driver that highlighted a > possible issue with long hangs following synchronized retries when using the > _ClientCredsTokenProvider_ and calling _AbfsClient.getAccessToken_. We have > seen > [https://github.com/apache/hadoop/pull/1923|https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgithub.com%2Fapache%2Fhadoop%2Fpull%2F1923&data=02%7c01%7csnvijaya%40microsoft.com%7c7362c5ba4af24a553c4308d807ec459d%7c72f988bf86f141af91ab2d7cd011db47%7c1%7c0%7c637268058650442694&sdata=FePBBkEqj5kI2Ty4kNr3a2oJgB8Kvy3NvyRK8NoxyH4%3D&reserved=0], > but it does not directly apply since we are not using a custom token > provider, but instead _ClientCredsTokenProvider_ that ultimately relies on > _AzureADAuthenticator_. > > The problem was that the critical section of getAccessToken, combined with a > possibly redundant retry policy, made jobs hanging for a very long time, > since only one thread at a time could make progress, and this progress > amounted to basically retrying on a failing connection for 30-60 minutes. > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #2037: HDFS-14984. HDFS setQuota: Error message should be added for invalid …
hadoop-yetus commented on pull request #2037: URL: https://github.com/apache/hadoop/pull/2037#issuecomment-654163674 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 31s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | +0 :ok: | mvndep | 0m 26s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 18m 57s | trunk passed | | +1 :green_heart: | compile | 3m 53s | trunk passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 | | +1 :green_heart: | compile | 3m 34s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | checkstyle | 1m 2s | trunk passed | | +1 :green_heart: | mvnsite | 2m 10s | trunk passed | | +1 :green_heart: | shadedclient | 17m 2s | branch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 38s | hadoop-hdfs-client in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | -1 :x: | javadoc | 0m 37s | hadoop-hdfs in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | +1 :green_heart: | javadoc | 1m 22s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +0 :ok: | spotbugs | 2m 55s | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +1 :green_heart: | findbugs | 5m 5s | trunk passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 26s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 53s | the patch passed | | +1 :green_heart: | compile | 3m 45s | the patch passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 | | +1 :green_heart: | javac | 3m 45s | the patch passed | | +1 :green_heart: | compile | 3m 24s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | javac | 3m 24s | the patch passed | | -0 :warning: | checkstyle | 0m 57s | hadoop-hdfs-project: The patch generated 4 new + 260 unchanged - 0 fixed = 264 total (was 260) | | +1 :green_heart: | mvnsite | 1m 58s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 13m 39s | patch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 36s | hadoop-hdfs-client in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | -1 :x: | javadoc | 0m 32s | hadoop-hdfs in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | +1 :green_heart: | javadoc | 1m 15s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | findbugs | 5m 13s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 2m 2s | hadoop-hdfs-client in the patch passed. | | -1 :x: | unit | 94m 47s | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 42s | The patch does not generate ASF License warnings. 
| | | | 187m 31s | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.cli.TestHDFSCLI | | | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier | | | hadoop.hdfs.TestDFSInputStreamBlockLocations | | | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-2037/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2037 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux d0f7e0ce2229 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 639acb6d892 | | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-2037/3/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-client-jdkUbuntu-11.0.7+10-post-Ubunt
[GitHub] [hadoop] steveloughran commented on pull request #1898: HADOOP-16852: Report read-ahead error back
steveloughran commented on pull request #1898: URL: https://github.com/apache/hadoop/pull/1898#issuecomment-654142149 are there plans to backport? If you can cherry pick onto branch-3.3 and do the test run, let me know and I will do the merge. No This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17111) Replace Guava Optional with Java8+ Optional
[ https://issues.apache.org/jira/browse/HADOOP-17111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-17111: --- Fix Version/s: 3.4.0 3.3.1 Hadoop Flags: Reviewed Resolution: Fixed Status: Resolved (was: Patch Available) Committed to trunk and branch-3.3. Thank you, [~ahussein]! > Replace Guava Optional with Java8+ Optional > --- > > Key: HADOOP-17111 > URL: https://issues.apache.org/jira/browse/HADOOP-17111 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Ahmed Hussein >Assignee: Ahmed Hussein >Priority: Major > Fix For: 3.3.1, 3.4.0 > > Attachments: HADOOP-17111.001.patch, HADOOP-17111.002.patch > > > {code:java} > Targets > Occurrences of 'com.google.common.base.Optional' in project with mask > '*.java' > Found Occurrences (3 usages found) > org.apache.hadoop.yarn.server.nodemanager (2 usages found) > DefaultContainerExecutor.java (1 usage found) > 71 import com.google.common.base.Optional; > LinuxContainerExecutor.java (1 usage found) > 22 import com.google.common.base.Optional; > org.apache.hadoop.yarn.server.resourcemanager.recovery (1 usage found) > TestZKRMStateStorePerf.java (1 usage found) > 21 import com.google.common.base.Optional; > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
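The migration this change performs is mostly mechanical: the Guava and JDK types differ mainly in a few method names. A small sketch of the mapping (the `greet` helper is made up for illustration; it is not from the patch):

```java
import java.util.Optional;

// Guava -> JDK 8+ Optional: the renames this kind of migration applies.
//   com.google.common.base.Optional.absent()  -> java.util.Optional.empty()
//   opt.or(fallback)                          -> opt.orElse(fallback)
//   opt.orNull()                              -> opt.orElse(null)
//   Optional.of(x), opt.isPresent(), opt.get() are unchanged.
public class OptionalMigration {
    static String greet(Optional<String> name) {
        return "hello " + name.orElse("world");
    }
}
```

After the import swap and those renames, call sites compile unchanged, which is why the patch touches only imports and a handful of method calls.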
[GitHub] [hadoop] NickyYe commented on a change in pull request #2080: HDFS-15417. RBF: Get the datanode report from cache for federation WebHDFS operations
NickyYe commented on a change in pull request #2080: URL: https://github.com/apache/hadoop/pull/2080#discussion_r450036164

## File path: hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpc.java

## @@ -1777,6 +1777,43 @@ public void testgetGroupsForUser() throws IOException {
     assertArrayEquals(group, result);
   }

+  @Test
+  public void testGetCachedDatanodeReport() throws Exception {
+    final DatanodeInfo[] datanodeReport =
+        routerProtocol.getDatanodeReport(DatanodeReportType.ALL);
+
+    // We should have 12 nodes in total
+    assertEquals(12, datanodeReport.length);
+
+    // We should be caching this information
+    DatanodeInfo[] datanodeReport1 =
+        routerProtocol.getDatanodeReport(DatanodeReportType.ALL);
+    assertArrayEquals(datanodeReport1, datanodeReport);
+
+    // Add one datanode
+    getCluster().getCluster().startDataNodes(getCluster().getCluster().getConfiguration(0),

Review comment: Done. Thanks. @sunchao

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17111) Replace Guava Optional with Java8+ Optional
[ https://issues.apache.org/jira/browse/HADOOP-17111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17151839#comment-17151839 ] Hudson commented on HADOOP-17111: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18409 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/18409/]) HADOOP-17111. Replace Guava Optional with Java8+ Optional. Contributed (aajisaka: rev 639acb6d8921127cde3174a302f2e3d71b44f052) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/TestZKRMStateStorePerf.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/DefaultContainerExecutor.java * (edit) hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml > Replace Guava Optional with Java8+ Optional > --- > > Key: HADOOP-17111 > URL: https://issues.apache.org/jira/browse/HADOOP-17111 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Ahmed Hussein >Assignee: Ahmed Hussein >Priority: Major > Attachments: HADOOP-17111.001.patch, HADOOP-17111.002.patch > > > {code:java} > Targets > Occurrences of 'com.google.common.base.Optional' in project with mask > '*.java' > Found Occurrences (3 usages found) > org.apache.hadoop.yarn.server.nodemanager (2 usages found) > DefaultContainerExecutor.java (1 usage found) > 71 import com.google.common.base.Optional; > LinuxContainerExecutor.java (1 usage found) > 22 import com.google.common.base.Optional; > org.apache.hadoop.yarn.server.resourcemanager.recovery (1 usage found) > TestZKRMStateStorePerf.java (1 usage found) > 21 import com.google.common.base.Optional; > {code} -- This message was sent by Atlassian Jira 
(v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #2080: HDFS-15417. RBF: Get the datanode report from cache for federation WebHDFS operations
hadoop-yetus commented on pull request #2080: URL: https://github.com/apache/hadoop/pull/2080#issuecomment-654067653 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 32s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 18m 50s | trunk passed | | +1 :green_heart: | compile | 0m 41s | trunk passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 | | +1 :green_heart: | compile | 0m 35s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | checkstyle | 0m 26s | trunk passed | | +1 :green_heart: | mvnsite | 0m 39s | trunk passed | | +1 :green_heart: | shadedclient | 15m 4s | branch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 28s | hadoop-hdfs-rbf in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | +1 :green_heart: | javadoc | 0m 34s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +0 :ok: | spotbugs | 1m 11s | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +1 :green_heart: | findbugs | 1m 9s | trunk passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 31s | the patch passed | | +1 :green_heart: | compile | 0m 31s | the patch passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 | | +1 :green_heart: | javac | 0m 31s | the patch passed | | +1 :green_heart: | compile | 0m 27s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | javac | 0m 27s | the patch passed | | +1 :green_heart: | checkstyle | 0m 17s | the patch passed | | +1 :green_heart: | mvnsite | 0m 31s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 13m 42s | patch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 25s | hadoop-hdfs-rbf in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | +1 :green_heart: | javadoc | 0m 29s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | findbugs | 1m 11s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 7m 58s | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 32s | The patch does not generate ASF License warnings. 
| | | | 68m 30s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-2080/10/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2080 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 4575909aa2c7 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 55a2ae80dc9 | | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-2080/10/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-2080/10/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-2080/10/testReport/ | | Max. process+thread count | 2916 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-2080/10/console | | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from t
[GitHub] [hadoop] hadoop-yetus commented on pull request #2037: HDFS-14984. HDFS setQuota: Error message should be added for invalid …
hadoop-yetus commented on pull request #2037: URL: https://github.com/apache/hadoop/pull/2037#issuecomment-654062521

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 0m 31s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 1m 1s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 19m 1s | trunk passed |
| +1 :green_heart: | compile | 3m 55s | trunk passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 |
| +1 :green_heart: | compile | 3m 30s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| +1 :green_heart: | checkstyle | 1m 3s | trunk passed |
| +1 :green_heart: | mvnsite | 2m 7s | trunk passed |
| +1 :green_heart: | shadedclient | 17m 21s | branch has no errors when building and testing our client artifacts. |
| -1 :x: | javadoc | 0m 39s | hadoop-hdfs-client in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. |
| -1 :x: | javadoc | 0m 36s | hadoop-hdfs in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. |
| +1 :green_heart: | javadoc | 1m 20s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| +0 :ok: | spotbugs | 2m 53s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 5m 3s | trunk passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 26s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 54s | the patch passed |
| +1 :green_heart: | compile | 3m 46s | the patch passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 |
| +1 :green_heart: | javac | 3m 46s | the patch passed |
| +1 :green_heart: | compile | 3m 23s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| +1 :green_heart: | javac | 3m 23s | the patch passed |
| -0 :warning: | checkstyle | 0m 54s | hadoop-hdfs-project: The patch generated 4 new + 260 unchanged - 0 fixed = 264 total (was 260) |
| +1 :green_heart: | mvnsite | 1m 51s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | shadedclient | 13m 34s | patch has no errors when building and testing our client artifacts. |
| -1 :x: | javadoc | 0m 35s | hadoop-hdfs-client in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. |
| -1 :x: | javadoc | 0m 32s | hadoop-hdfs in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. |
| +1 :green_heart: | javadoc | 1m 14s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| +1 :green_heart: | findbugs | 5m 17s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 2m 3s | hadoop-hdfs-client in the patch passed. |
| -1 :x: | unit | 93m 44s | hadoop-hdfs in the patch failed. |
| +1 :green_heart: | asflicense | 0m 43s | The patch does not generate ASF License warnings. |
| | | | 187m 10s | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.hdfs.TestRollingUpgrade |
| | hadoop.cli.TestHDFSCLI |
| | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
| | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
| | hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
| | hadoop.hdfs.TestQuota |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-2037/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2037 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux d5310787731b 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 55a2ae80dc9 |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-2037/2/artifa
[jira] [Commented] (HADOOP-17111) Replace Guava Optional with Java8+ Optional
[ https://issues.apache.org/jira/browse/HADOOP-17111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17151817#comment-17151817 ]

Akira Ajisaka commented on HADOOP-17111:
+1, thanks Ahmed.

> Replace Guava Optional with Java8+ Optional
> -------------------------------------------
>
>                 Key: HADOOP-17111
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17111
>             Project: Hadoop Common
>          Issue Type: Sub-task
>            Reporter: Ahmed Hussein
>            Assignee: Ahmed Hussein
>            Priority: Major
>       Attachments: HADOOP-17111.001.patch, HADOOP-17111.002.patch
>
> {code:java}
> Targets
>     Occurrences of 'com.google.common.base.Optional' in project with mask '*.java'
> Found Occurrences (3 usages found)
>     org.apache.hadoop.yarn.server.nodemanager (2 usages found)
>         DefaultContainerExecutor.java (1 usage found)
>             71 import com.google.common.base.Optional;
>         LinuxContainerExecutor.java (1 usage found)
>             22 import com.google.common.base.Optional;
>     org.apache.hadoop.yarn.server.resourcemanager.recovery (1 usage found)
>         TestZKRMStateStorePerf.java (1 usage found)
>             21 import com.google.common.base.Optional;
> {code}

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
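For context on what such a migration looks like, here is a minimal sketch of the Guava-to-JDK `Optional` change the ticket describes. The class and method names below are illustrative, not taken from the Hadoop source; the general mapping is `Optional.fromNullable` → `Optional.ofNullable`, `or(default)` → `orElse(default)`, and `Optional.absent()` → `Optional.empty()`.

```java
import java.util.Optional;

public class OptionalMigration {

    // Before (Guava):
    //   com.google.common.base.Optional<String> user =
    //       com.google.common.base.Optional.fromNullable(name);
    //   return user.or("nobody");

    // After (java.util.Optional, Java 8+):
    static String userOrDefault(String name) {
        // ofNullable wraps a possibly-null value; orElse supplies the fallback
        return Optional.ofNullable(name).orElse("nobody");
    }

    public static void main(String[] args) {
        System.out.println(userOrDefault(null));    // prints "nobody"
        System.out.println(userOrDefault("alice")); // prints "alice"
    }
}
```

Since `java.util.Optional` mirrors most of the Guava API, changes like the three imports found above are typically mechanical, which is why the review was a quick +1.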