[GitHub] [hadoop] hadoop-yetus commented on pull request #2098: HDFS-15424. Javadoc failing with "cannot find symbol com.google.protobuf.GeneratedMessageV3 implements"
hadoop-yetus commented on pull request #2098: URL: https://github.com/apache/hadoop/pull/2098#issuecomment-649236373 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 32s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | +0 :ok: | mvndep | 1m 13s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 19m 21s | trunk passed | | +1 :green_heart: | compile | 19m 33s | trunk passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 | | +1 :green_heart: | compile | 16m 58s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | mvnsite | 17m 23s | trunk passed | | +1 :green_heart: | shadedclient | 14m 59s | branch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 31s | root in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | -1 :x: | javadoc | 0m 29s | root in trunk failed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09. | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 31s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 20m 9s | the patch passed | | +1 :green_heart: | compile | 20m 6s | the patch passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 | | +1 :green_heart: | javac | 20m 6s | the patch passed | | +1 :green_heart: | compile | 18m 52s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | javac | 18m 52s | the patch passed | | +1 :green_heart: | mvnsite | 17m 24s | the patch passed | | +1 :green_heart: | shellcheck | 0m 0s | There were no new shellcheck issues. | | +1 :green_heart: | shelldocs | 0m 19s | There were no new shelldocs issues. | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 1s | The patch has no ill-formed XML file. | | +1 :green_heart: | shadedclient | 13m 59s | patch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 33s | root in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | -1 :x: | javadoc | 0m 33s | root in the patch failed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09. | ||| _ Other Tests _ | | -1 :x: | unit | 571m 50s | root in the patch passed. | | -1 :x: | asflicense | 1m 48s | The patch generated 1 ASF License warnings. 
| | | | 761m 35s | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.federation.router.TestConnectionManager | | | hadoop.hdfs.TestFileChecksum | | | hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics | | | hadoop.hdfs.TestReconstructStripedFile | | | hadoop.hdfs.TestDecommissionWithBackoffMonitor | | | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy | | | hadoop.hdfs.TestDecommission | | | hadoop.hdfs.TestGetFileChecksum | | | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover | | | hadoop.hdfs.server.namenode.ha.TestHAAppend | | | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped | | | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer | | | hadoop.yarn.applications.distributedshell.TestDistributedShell | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-2098/4/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2098 | | Optional Tests | dupname asflicense shellcheck shelldocs compile javac javadoc mvninstall mvnsite unit shadedclient xml | | uname | Linux e061357ed6d3 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 84110d850e2 | | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | javadoc |
[jira] [Commented] (HADOOP-17089) WASB: Update azure-storage-java SDK
[ https://issues.apache.org/jira/browse/HADOOP-17089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17144661#comment-17144661 ] Hudson commented on HADOOP-17089: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18378 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/18378/]) HADOOP-17089: WASB: Update azure-storage-java SDK Contributed by Thomas (tmarq: rev 4b5b54c73f2fd9146237087a59453e2b5d70f9ed) * (edit) hadoop-project/pom.xml * (edit) hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/ITestNativeAzureFileSystemConcurrencyLive.java > WASB: Update azure-storage-java SDK > --- > > Key: HADOOP-17089 > URL: https://issues.apache.org/jira/browse/HADOOP-17089 > Project: Hadoop Common > Issue Type: Bug > Components: fs/azure >Affects Versions: 2.7.0, 2.8.0, 2.9.0, 3.0.0, 3.1.0, 3.2.0 >Reporter: Thomas Marqardt >Assignee: Thomas Marqardt >Priority: Major > Fix For: 3.3.1 > > > WASB depends on the Azure Storage Java SDK. There is a concurrency bug in > the Azure Storage Java SDK that can cause the results of a list blobs > operation to appear empty. This causes the Filesystem listStatus and similar > APIs to return empty results. This has been seen in Spark work loads when > jobs use more than one executor core. > See [https://github.com/Azure/azure-storage-java/pull/546] for details on the > bug in the Azure Storage SDK. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
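To make the failure mode above easier to picture, here is a minimal, self-contained Java sketch (illustrative names only, not the azure-storage-java code itself) of why a DocumentBuilderFactory shared across threads while setNamespaceAware is being set concurrently can hand back mis-configured parsers — the class of race addressed by https://github.com/Azure/azure-storage-java/pull/546 — and how a thread-confined factory avoids it.

```java
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;

// Illustrative sketch only: DocumentBuilderFactory is not thread-safe, so
// configuring and using one shared instance from many threads can yield
// parsers with unexpected settings (and thus wrongly parsed list responses).
public final class ListParserFactorySketch {

  // Risky pattern: one factory, mutated and read by every request thread.
  private static final DocumentBuilderFactory SHARED =
      DocumentBuilderFactory.newInstance();

  static DocumentBuilder racyBuilder() throws ParserConfigurationException {
    SHARED.setNamespaceAware(true);       // another thread may interleave here
    return SHARED.newDocumentBuilder();   // may observe a half-configured factory
  }

  // Safer pattern: confine a fully configured factory to each thread.
  private static final ThreadLocal<DocumentBuilderFactory> PER_THREAD =
      ThreadLocal.withInitial(() -> {
        DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
        f.setNamespaceAware(true);
        return f;
      });

  static DocumentBuilder safeBuilder() throws ParserConfigurationException {
    return PER_THREAD.get().newDocumentBuilder();
  }
}
```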
[GitHub] [hadoop] ThomasMarquardt commented on pull request #2099: HADOOP-17089: WASB: Update azure-storage-java SDK
ThomasMarquardt commented on pull request #2099: URL: https://github.com/apache/hadoop/pull/2099#issuecomment-649235262 The javadoc issue is tracked by HADOOP-17085. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-17089) WASB: Update azure-storage-java SDK
[ https://issues.apache.org/jira/browse/HADOOP-17089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thomas Marqardt resolved HADOOP-17089. -- Fix Version/s: 3.3.1 Release Note: Azure WASB bug fix that can cause list results to appear empty. Resolution: Fixed trunk: commit 4b5b54c73f2fd9146237087a59453e2b5d70f9ed Author: Thomas Marquardt Date: Wed Jun 24 18:37:25 2020 + branch-3.3 commit ee192c48265fe7dcf23bc33f6a6698bb41477ca9 Author: Thomas Marquardt Date: Wed Jun 24 18:37:25 2020 + > WASB: Update azure-storage-java SDK > --- > > Key: HADOOP-17089 > URL: https://issues.apache.org/jira/browse/HADOOP-17089 > Project: Hadoop Common > Issue Type: Bug > Components: fs/azure >Affects Versions: 2.7.0, 2.8.0, 2.9.0, 3.0.0, 3.1.0, 3.2.0 >Reporter: Thomas Marqardt >Assignee: Thomas Marqardt >Priority: Major > Fix For: 3.3.1 > > > WASB depends on the Azure Storage Java SDK. There is a concurrency bug in > the Azure Storage Java SDK that can cause the results of a list blobs > operation to appear empty. This causes the Filesystem listStatus and similar > APIs to return empty results. This has been seen in Spark work loads when > jobs use more than one executor core. > See [https://github.com/Azure/azure-storage-java/pull/546] for details on the > bug in the Azure Storage SDK. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] asfgit merged pull request #2099: HADOOP-17089: WASB: Update azure-storage-java SDK
asfgit merged pull request #2099: URL: https://github.com/apache/hadoop/pull/2099 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17079) Optimize UGI#getGroups by adding UGI#getGroupsSet
[ https://issues.apache.org/jira/browse/HADOOP-17079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17144653#comment-17144653 ] Hadoop QA commented on HADOOP-17079: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 52s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 11 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 44s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 8m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 30m 35s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 2s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 2m 13s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 13m 40s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 29s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 23m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 23m 18s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 4m 25s{color} | {color:orange} root: The patch generated 10 new + 852 unchanged - 4 fixed = 862 total (was 856) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 8m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 18m 25s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 17s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 51s{color} | {color:red} hadoop-common-project/hadoop-common generated 15 new + 0 unchanged - 0 fixed = 15 total (was 0) {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m 29s{color} | {color:red} hadoop-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}115m 51s{color} | {color:red} hadoop-hdfs in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 4m 59s{color} | {color:red} hadoop-hdfs-httpfs in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m 27s{color} | {color:red} hadoop-hdfs-rbf in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 22m 15s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 92m 29s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 3m 56s{color} | {color:red}
[GitHub] [hadoop] umamaheswararao commented on pull request #2092: HDFS-15429. mkdirs should work when parent dir is an internalDir and fallback configured.
umamaheswararao commented on pull request #2092: URL: https://github.com/apache/hadoop/pull/2092#issuecomment-649224805 For some reason Yetus is failing at the last step and not posting results. Here is the link: https://builds.apache.org/job/hadoop-multibranch/job/PR-2092/2/console No test failures are related to this change and there are no additional checkstyle issues.
```
15:36:33
15:36:33
15:36:33 checkstyle: patch
15:36:33
15:36:33
15:36:33
15:36:33
15:36:55 cd /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2092/src
15:36:55 /usr/bin/mvn --batch-mode checkstyle:checkstyle -Dcheckstyle.consoleOutput=true -Ptest-patch -DskipTests -Ptest-patch > /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2092/out/buildtool-patch-checkstyle-root.txt 2>&1
15:39:31
15:39:31 root: The patch generated 0 new + 90 unchanged - 1 fixed = 90 total (was 91)
```
https://builds.apache.org/job/hadoop-multibranch/job/PR-2092/2/testReport/ This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #2099: HADOOP-17089: WASB: Update azure-storage-java SDK
hadoop-yetus commented on pull request #2099: URL: https://github.com/apache/hadoop/pull/2099#issuecomment-649219878 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 32s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | +0 :ok: | mvndep | 0m 26s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 19m 3s | trunk passed | | +1 :green_heart: | compile | 19m 24s | trunk passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 | | +1 :green_heart: | compile | 16m 49s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | checkstyle | 2m 38s | trunk passed | | +1 :green_heart: | mvnsite | 1m 26s | trunk passed | | +1 :green_heart: | shadedclient | 18m 42s | branch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 42s | hadoop-azure in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | +1 :green_heart: | javadoc | 1m 16s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +0 :ok: | spotbugs | 1m 5s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +0 :ok: | findbugs | 0m 36s | branch/hadoop-project no findbugs output file (findbugsXml.xml) | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 25s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 0m 41s | the patch passed | | +1 :green_heart: | compile | 18m 47s | the patch passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 | | +1 :green_heart: | javac | 18m 47s | the patch passed | | +1 :green_heart: | compile | 16m 51s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | javac | 16m 51s | the patch passed | | +1 :green_heart: | checkstyle | 2m 43s | the patch passed | | +1 :green_heart: | mvnsite | 1m 25s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 2s | The patch has no ill-formed XML file. | | +1 :green_heart: | shadedclient | 14m 8s | patch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 43s | hadoop-azure in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | +1 :green_heart: | javadoc | 1m 15s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +0 :ok: | findbugs | 0m 34s | hadoop-project has no data from findbugs | ||| _ Other Tests _ | | +1 :green_heart: | unit | 0m 35s | hadoop-project in the patch passed. | | +1 :green_heart: | unit | 1m 37s | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 54s | The patch does not generate ASF License warnings. 
| | | | 146m 56s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-2099/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2099 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle | | uname | Linux 01be23e81775 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 84110d850e2 | | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-2099/3/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-2099/3/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-2099/3/testReport/ | | Max. process+thread count | 449 (vs. ulimit of 5500) | | modules | C: hadoop-project hadoop-tools/hadoop-azure U: . | | Console output |
[jira] [Assigned] (HADOOP-17093) ABFS: GetAccessToken unrecoverable failures are being retried
[ https://issues.apache.org/jira/browse/HADOOP-17093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sneha Vijayarajan reassigned HADOOP-17093: -- Assignee: Sneha Vijayarajan > ABFS: GetAccessToken unrecoverable failures are being retried > - > > Key: HADOOP-17093 > URL: https://issues.apache.org/jira/browse/HADOOP-17093 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Reporter: Sneha Vijayarajan >Assignee: Sneha Vijayarajan >Priority: Major > Fix For: 3.4.0 > > > When there is an invalid config set, call to fetch token fails with exception: > throw new UnexpectedResponseException(httpResponseCode, > requestId, > operation > + " Unexpected response." > + " Check configuration, URLs and proxy settings." > + " proxies=" + proxies, > authEndpoint, > responseContentType, > responseBody); > } > Issue here is that UnexpectedResponseException is not recognized as > irrecoverable state and ends up being retried. This needs to be fixed. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
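A rough sketch of the behavior being asked for, with hypothetical names (this is not the ABFS/AzureADAuthenticator code): classify the token-fetch failure before retrying, so that 4xx responses caused by bad configuration or credentials fail fast, while only transient 5xx responses and network errors are retried.

```java
import java.io.IOException;

// Hypothetical sketch, not the ABFS implementation: treat 4xx token-fetch
// failures (invalid config, bad credentials) as unrecoverable and fail fast;
// retry only transient 5xx responses and network errors.
final class TokenFetchRetrySketch {

  /** Hypothetical exception type carrying the HTTP status code. */
  static class HttpStatusException extends IOException {
    final int statusCode;
    HttpStatusException(int statusCode, String msg) {
      super(msg);
      this.statusCode = statusCode;
    }
  }

  interface TokenEndpoint {
    String fetchToken() throws IOException;
  }

  static String fetchTokenWithRetry(TokenEndpoint endpoint, int maxRetries)
      throws IOException {
    IOException last = new IOException("maxRetries must be at least 1");
    for (int attempt = 1; attempt <= maxRetries; attempt++) {
      try {
        return endpoint.fetchToken();
      } catch (HttpStatusException e) {
        if (e.statusCode >= 400 && e.statusCode < 500) {
          throw e;      // unrecoverable: retrying the same bad request cannot help
        }
        last = e;       // 5xx: transient server-side error, retry
      } catch (IOException e) {
        last = e;       // network hiccup, retry
      }
    }
    throw last;
  }
}
```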
[jira] [Assigned] (HADOOP-17092) ABFS: Long waits and unintended retries when multiple threads try to fetch token using ClientCreds
[ https://issues.apache.org/jira/browse/HADOOP-17092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sneha Vijayarajan reassigned HADOOP-17092: -- Assignee: Sneha Vijayarajan > ABFS: Long waits and unintended retries when multiple threads try to fetch > token using ClientCreds > -- > > Key: HADOOP-17092 > URL: https://issues.apache.org/jira/browse/HADOOP-17092 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Reporter: Sneha Vijayarajan >Assignee: Sneha Vijayarajan >Priority: Major > Fix For: 3.4.0 > > > Issue reported by DB: > we recently experienced some problems with ABFS driver that highlighted a > possible issue with long hangs following synchronized retries when using the > _ClientCredsTokenProvider_ and calling _AbfsClient.getAccessToken_. We have > seen > [https://github.com/apache/hadoop/pull/1923|https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgithub.com%2Fapache%2Fhadoop%2Fpull%2F1923=02%7c01%7csnvijaya%40microsoft.com%7c7362c5ba4af24a553c4308d807ec459d%7c72f988bf86f141af91ab2d7cd011db47%7c1%7c0%7c637268058650442694=FePBBkEqj5kI2Ty4kNr3a2oJgB8Kvy3NvyRK8NoxyH4%3D=0], > but it does not directly apply since we are not using a custom token > provider, but instead _ClientCredsTokenProvider_ that ultimately relies on > _AzureADAuthenticator_. > > The problem was that the critical section of getAccessToken, combined with a > possibly redundant retry policy, made jobs hanging for a very long time, > since only one thread at a time could make progress, and this progress > amounted to basically retrying on a failing connection for 30-60 minutes. > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
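One way to picture the mitigation discussed here, under assumed names (not the actual AzureADAuthenticator): cache the token and let only the thread that finds it near expiry take the lock and refresh, so concurrent callers return the cached value instead of queueing behind a slow or failing fetch inside a synchronized getAccessToken.

```java
import java.io.IOException;
import java.time.Instant;

// Illustrative sketch with assumed names: a cached token plus a narrow
// synchronized refresh path keeps concurrent callers from serializing
// behind every token fetch (and its retries).
final class CachedTokenSketch {

  interface TokenFetcher {
    Token fetch() throws IOException;
  }

  static final class Token {
    final String value;
    final Instant expiry;
    Token(String value, Instant expiry) {
      this.value = value;
      this.expiry = expiry;
    }
  }

  private static final long REFRESH_MARGIN_SECONDS = 300;

  private final TokenFetcher fetcher;
  private volatile Token cached;

  CachedTokenSketch(TokenFetcher fetcher) {
    this.fetcher = fetcher;
  }

  String getAccessToken() throws IOException {
    Token t = cached;
    if (isFresh(t)) {
      return t.value;                 // fast path: no lock, no network call
    }
    synchronized (this) {             // only refreshing threads contend here
      t = cached;
      if (!isFresh(t)) {              // re-check: another thread may have refreshed
        t = fetcher.fetch();
        cached = t;
      }
      return t.value;
    }
  }

  private static boolean isFresh(Token t) {
    return t != null
        && t.expiry.isAfter(Instant.now().plusSeconds(REFRESH_MARGIN_SECONDS));
  }
}
```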
[jira] [Created] (HADOOP-17093) ABFS: GetAccessToken unrecoverable failures are being retried
Sneha Vijayarajan created HADOOP-17093: -- Summary: ABFS: GetAccessToken unrecoverable failures are being retried Key: HADOOP-17093 URL: https://issues.apache.org/jira/browse/HADOOP-17093 Project: Hadoop Common Issue Type: Sub-task Components: fs/azure Reporter: Sneha Vijayarajan Fix For: 3.4.0 When there is an invalid config set, call to fetch token fails with exception: throw new UnexpectedResponseException(httpResponseCode, requestId, operation + " Unexpected response." + " Check configuration, URLs and proxy settings." + " proxies=" + proxies, authEndpoint, responseContentType, responseBody); } Issue here is that UnexpectedResponseException is not recognized as irrecoverable state and ends up being retried. This needs to be fixed. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17092) ABFS: Long waits and unintended retries when multiple threads try to fetch token using ClientCreds
[ https://issues.apache.org/jira/browse/HADOOP-17092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sneha Vijayarajan updated HADOOP-17092: --- Description: Issue reported by DB: we recently experienced some problems with ABFS driver that highlighted a possible issue with long hangs following synchronized retries when using the _ClientCredsTokenProvider_ and calling _AbfsClient.getAccessToken_. We have seen [https://github.com/apache/hadoop/pull/1923|https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgithub.com%2Fapache%2Fhadoop%2Fpull%2F1923=02%7c01%7csnvijaya%40microsoft.com%7c7362c5ba4af24a553c4308d807ec459d%7c72f988bf86f141af91ab2d7cd011db47%7c1%7c0%7c637268058650442694=FePBBkEqj5kI2Ty4kNr3a2oJgB8Kvy3NvyRK8NoxyH4%3D=0], but it does not directly apply since we are not using a custom token provider, but instead _ClientCredsTokenProvider_ that ultimately relies on _AzureADAuthenticator_. The problem was that the critical section of getAccessToken, combined with a possibly redundant retry policy, made jobs hanging for a very long time, since only one thread at a time could make progress, and this progress amounted to basically retrying on a failing connection for 30-60 minutes. > ABFS: Long waits and unintended retries when multiple threads try to fetch > token using ClientCreds > -- > > Key: HADOOP-17092 > URL: https://issues.apache.org/jira/browse/HADOOP-17092 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Reporter: Sneha Vijayarajan >Priority: Major > Fix For: 3.4.0 > > > Issue reported by DB: > we recently experienced some problems with ABFS driver that highlighted a > possible issue with long hangs following synchronized retries when using the > _ClientCredsTokenProvider_ and calling _AbfsClient.getAccessToken_. We have > seen > [https://github.com/apache/hadoop/pull/1923|https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgithub.com%2Fapache%2Fhadoop%2Fpull%2F1923=02%7c01%7csnvijaya%40microsoft.com%7c7362c5ba4af24a553c4308d807ec459d%7c72f988bf86f141af91ab2d7cd011db47%7c1%7c0%7c637268058650442694=FePBBkEqj5kI2Ty4kNr3a2oJgB8Kvy3NvyRK8NoxyH4%3D=0], > but it does not directly apply since we are not using a custom token > provider, but instead _ClientCredsTokenProvider_ that ultimately relies on > _AzureADAuthenticator_. > > The problem was that the critical section of getAccessToken, combined with a > possibly redundant retry policy, made jobs hanging for a very long time, > since only one thread at a time could make progress, and this progress > amounted to basically retrying on a failing connection for 30-60 minutes. > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-17092) ABFS: Long waits and unintended retries when multiple threads try to fetch token using ClientCreds
Sneha Vijayarajan created HADOOP-17092: -- Summary: ABFS: Long waits and unintended retries when multiple threads try to fetch token using ClientCreds Key: HADOOP-17092 URL: https://issues.apache.org/jira/browse/HADOOP-17092 Project: Hadoop Common Issue Type: Sub-task Components: fs/azure Reporter: Sneha Vijayarajan Fix For: 3.4.0 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] umamaheswararao commented on pull request #2092: HDFS-15429. mkdirs should work when parent dir is an internalDir and fallback configured.
umamaheswararao commented on pull request #2092: URL: https://github.com/apache/hadoop/pull/2092#issuecomment-649195125 Thanks a lot @rakeshadr for the offline discussion. I relaxed the restriction on creating parent dirs for the case where the fallback does not have the same directory structure as the internal mount path. That makes sense because the user can create that dir tree in the fallback anyway when the given directory is two levels deeper than the internal mount dir. I updated the patch and added another test to cover it. Please take a look at it. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17083) Update guava to 27.0-jre in hadoop branch-2.10
[ https://issues.apache.org/jira/browse/HADOOP-17083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17144615#comment-17144615 ] Hadoop QA commented on HADOOP-17083: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 29s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 7 new or modified test files. {color} | || || || || {color:brown} branch-2.10 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 30s{color} | {color:green} branch-2.10 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 55s{color} | {color:green} branch-2.10 passed with JDK Oracle Corporation-1.7.0_95-b00 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 15s{color} | {color:green} branch-2.10 passed with JDK Private Build-1.8.0_252-8u252-b09-1~16.04-b09 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 8s{color} | {color:green} branch-2.10 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 9m 51s{color} | {color:green} branch-2.10 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 8m 5s{color} | {color:green} branch-2.10 passed with JDK Oracle Corporation-1.7.0_95-b00 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 3s{color} | {color:green} branch-2.10 passed with JDK Private Build-1.8.0_252-8u252-b09-1~16.04-b09 {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 1m 9s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 15s{color} | {color:blue} branch/hadoop-project no findbugs output file (findbugsXml.xml) {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 51s{color} | {color:red} hadoop-common-project/hadoop-common in branch-2.10 has 14 extant findbugs warnings. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 52s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in branch-2.10 has 1 extant findbugs warnings. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 29s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in branch-2.10 has 10 extant findbugs warnings. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 8m 51s{color} | {color:red} hadoop-yarn-project/hadoop-yarn in branch-2.10 has 6 extant findbugs warnings. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 12s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common in branch-2.10 has 1 extant findbugs warnings. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 7s{color} | {color:red} hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core in branch-2.10 has 3 extant findbugs warnings. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 22s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 29s{color} | {color:green} the patch passed with JDK Oracle Corporation-1.7.0_95-b00 {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 13m 29s{color} | {color:red} root-jdkOracleCorporation-1.7.0_95-b00 with JDK Oracle Corporation-1.7.0_95-b00 generated 16 new + 1435 unchanged - 0 fixed = 1451 total (was 1435) {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 29s{color} | {color:green} the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~16.04-b09 {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 11m 29s{color} | {color:red}
[jira] [Commented] (HADOOP-17079) Optimize UGI#getGroups by adding UGI#getGroupsSet
[ https://issues.apache.org/jira/browse/HADOOP-17079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17144610#comment-17144610 ] Hadoop QA commented on HADOOP-17079: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 15s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 1s{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 11 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 12s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 6m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 25m 36s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 37s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 0m 52s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 11m 19s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 22s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 45s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 3m 4s{color} | {color:orange} root: The patch generated 10 new + 852 unchanged - 4 fixed = 862 total (was 856) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 7m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 49s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 48s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 48s{color} | {color:red} hadoop-common-project/hadoop-common generated 15 new + 0 unchanged - 0 fixed = 15 total (was 0) {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m 7s{color} | {color:red} hadoop-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}123m 21s{color} | {color:red} hadoop-hdfs in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 5m 25s{color} | {color:red} hadoop-hdfs-httpfs in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 11m 22s{color} | {color:red} hadoop-hdfs-rbf in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 23m 45s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}103m 5s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 4m 32s{color} | {color:red}
[GitHub] [hadoop] ThomasMarquardt commented on a change in pull request #2099: HADOOP-17089: WASB: Update azure-storage-java SDK
ThomasMarquardt commented on a change in pull request #2099: URL: https://github.com/apache/hadoop/pull/2099#discussion_r445274808 ## File path: hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/ITestNativeAzureFileSystemConcurrencyLive.java ## @@ -130,15 +131,55 @@ public void testConcurrentDeleteFile() throws Exception { } } + /** + * Validate the bug fix for HADOOP-17089. Please note that we were never + * able to reproduce this except during a Spark job that ran for multiple days + * and in a hacked-up azure-storage SDK that added sleep before and after + * the call to factory.setNamespaceAware(true) as shown in the description of + * https://github.com/Azure/azure-storage-java/pull/546. + */ + @Test(timeout = TEST_EXECUTION_TIMEOUT) + public void testConcurrentList() throws Exception { +final Path testDir = new Path("/tmp/data-loss/11230174258112/_temporary/0/_temporary/attempt_20200624190514_0006_m_0"); +final Path testFile = new Path(testDir, "part-4-15ea87b1-312c-4fdf-1820-95afb3dfc1c3-a010.snappy.parquet"); +fs.create(testFile).close(); +List tasks = new ArrayList<>(THREAD_COUNT); + +for (int i = 0; i < THREAD_COUNT; i++) { + tasks.add(new ListTask(fs, testDir)); +} + +ExecutorService es = null; +try { + es = Executors.newFixedThreadPool(THREAD_COUNT); + + List> futures = es.invokeAll(tasks); + + for (Future future : futures) { +Assert.assertTrue(future.isDone()); + +// we are using Callable, so if an exception +// occurred during the operation, it will be thrown +// when we call get +long fileCount = future.get(); +assertEquals("The list should always contain 1 file.",1, fileCount); Review comment: Thanks for the review Da. Fixed! This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] virajith opened a new pull request #2100: HDFS-15436. Default mount table name used by ViewFileSystem should be configurable
virajith opened a new pull request #2100: URL: https://github.com/apache/hadoop/pull/2100 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] DadanielZ commented on a change in pull request #2099: HADOOP-17089: WASB: Update azure-storage-java SDK
DadanielZ commented on a change in pull request #2099: URL: https://github.com/apache/hadoop/pull/2099#discussion_r445266567 ## File path: hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/ITestNativeAzureFileSystemConcurrencyLive.java ## @@ -130,15 +131,55 @@ public void testConcurrentDeleteFile() throws Exception { } } + /** + * Validate the bug fix for HADOOP-17089. Please note that we were never + * able to reproduce this except during a Spark job that ran for multiple days + * and in a hacked-up azure-storage SDK that added sleep before and after + * the call to factory.setNamespaceAware(true) as shown in the description of + * https://github.com/Azure/azure-storage-java/pull/546. + */ + @Test(timeout = TEST_EXECUTION_TIMEOUT) + public void testConcurrentList() throws Exception { +final Path testDir = new Path("/tmp/data-loss/11230174258112/_temporary/0/_temporary/attempt_20200624190514_0006_m_0"); +final Path testFile = new Path(testDir, "part-4-15ea87b1-312c-4fdf-1820-95afb3dfc1c3-a010.snappy.parquet"); +fs.create(testFile).close(); +List tasks = new ArrayList<>(THREAD_COUNT); + +for (int i = 0; i < THREAD_COUNT; i++) { + tasks.add(new ListTask(fs, testDir)); +} + +ExecutorService es = null; +try { + es = Executors.newFixedThreadPool(THREAD_COUNT); + + List> futures = es.invokeAll(tasks); + + for (Future future : futures) { +Assert.assertTrue(future.isDone()); + +// we are using Callable, so if an exception +// occurred during the operation, it will be thrown +// when we call get +long fileCount = future.get(); +assertEquals("The list should always contain 1 file.",1, fileCount); Review comment: Yetus complains here: `assertEquals("The list should always contain 1 file.",1, fileCount);:62: ',' is not followed by whitespace. [WhitespaceAfter]` This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #2099: HADOOP-17089: WASB: Update azure-storage-java SDK
hadoop-yetus commented on pull request #2099: URL: https://github.com/apache/hadoop/pull/2099#issuecomment-649165774 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 33s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | +0 :ok: | mvndep | 0m 27s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 19m 2s | trunk passed | | +1 :green_heart: | compile | 19m 20s | trunk passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 | | +1 :green_heart: | compile | 16m 54s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | checkstyle | 3m 17s | trunk passed | | +1 :green_heart: | mvnsite | 1m 28s | trunk passed | | +1 :green_heart: | shadedclient | 19m 21s | branch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 43s | hadoop-azure in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | +1 :green_heart: | javadoc | 1m 16s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +0 :ok: | spotbugs | 1m 5s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +0 :ok: | findbugs | 0m 36s | branch/hadoop-project no findbugs output file (findbugsXml.xml) | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 30s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 0m 42s | the patch passed | | +1 :green_heart: | compile | 18m 46s | the patch passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 | | +1 :green_heart: | javac | 18m 46s | the patch passed | | +1 :green_heart: | compile | 16m 51s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | javac | 16m 51s | the patch passed | | -0 :warning: | checkstyle | 2m 39s | root: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | | +1 :green_heart: | mvnsite | 1m 24s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 1s | The patch has no ill-formed XML file. | | +1 :green_heart: | shadedclient | 14m 6s | patch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 42s | hadoop-azure in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | +1 :green_heart: | javadoc | 1m 16s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +0 :ok: | findbugs | 0m 35s | hadoop-project has no data from findbugs | ||| _ Other Tests _ | | +1 :green_heart: | unit | 0m 34s | hadoop-project in the patch passed. | | +1 :green_heart: | unit | 1m 37s | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 55s | The patch does not generate ASF License warnings. 
| | | | 147m 33s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-2099/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2099 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle | | uname | Linux 75880192a261 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 84110d850e2 | | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-2099/2/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-2099/2/artifact/out/diff-checkstyle-root.txt | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-2099/2/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-2099/2/testReport/ | |
[jira] [Commented] (HADOOP-17089) WASB: Update azure-storage-java SDK
[ https://issues.apache.org/jira/browse/HADOOP-17089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17144571#comment-17144571 ] Thomas Marqardt commented on HADOOP-17089: -- Updated with test at [https://github.com/apache/hadoop/pull/2099.patch|https://github.com/apache/hadoop/pull/2099]. > WASB: Update azure-storage-java SDK > --- > > Key: HADOOP-17089 > URL: https://issues.apache.org/jira/browse/HADOOP-17089 > Project: Hadoop Common > Issue Type: Bug > Components: fs/azure >Affects Versions: 2.7.0, 2.8.0, 2.9.0, 3.0.0, 3.1.0, 3.2.0 >Reporter: Thomas Marqardt >Assignee: Thomas Marqardt >Priority: Major > > WASB depends on the Azure Storage Java SDK. There is a concurrency bug in > the Azure Storage Java SDK that can cause the results of a list blobs > operation to appear empty. This causes the Filesystem listStatus and similar > APIs to return empty results. This has been seen in Spark work loads when > jobs use more than one executor core. > See [https://github.com/Azure/azure-storage-java/pull/546] for details on the > bug in the Azure Storage SDK. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17072) Add getClusterRoot and getClusterRoots methods to FileSystem and ViewFilesystem
[ https://issues.apache.org/jira/browse/HADOOP-17072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1712#comment-1712 ] Virajith Jalaparti commented on HADOOP-17072: - Thanks for the feedback! [~ste...@apache.org] , thanks for listing the requirements for FileSystem changes. I am inclined to agree with [~umamaheswararao] 's suggestion of having this in a util class and not in FileSystem as sufficient APIs are already exposed to enable the same functionality. At this point, any application can actually do this implementation themselves and there's not much to do at the FS layer. > Add getClusterRoot and getClusterRoots methods to FileSystem and > ViewFilesystem > --- > > Key: HADOOP-17072 > URL: https://issues.apache.org/jira/browse/HADOOP-17072 > Project: Hadoop Common > Issue Type: Task > Components: fs, viewfs >Reporter: Virajith Jalaparti >Assignee: Virajith Jalaparti >Priority: Major > Attachments: HADOOP-17072.001.patch > > > In a federated setting (HDFS federation, federation across multiple buckets > on S3, multiple containers across Azure storage), certain system > tools/pipelines require the ability to map paths to the clusters/accounts. > Consider the example of GDPR compliance/retention jobs that need to go over > various datasets, ingested over a period of T days and remove/quarantine > datasets that are not properly annotated/have reached their retention period. > Such jobs can rely on renames to a global trash/quarantine directory to > accomplish their task. However, in a federated setting, efficient, atomic > renames (as those within a single HDFS cluster) are not supported across the > different clusters/shards in federation. As a result, such jobs will need to > leverage a trash/quarantine directory per cluster/shard. Further, they would > need to map from a particular path to the cluster/shard that contains this > path. > To address such cases, this JIRA proposes to get add two new methods to > {{FileSystem}}: {{getClusterRoot}} and {{getClusterRoots()}}. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
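As a concrete reading of the "util class" suggestion above, here is a sketch under assumptions (getClusterRoot is a hypothetical helper, not an existing FileSystem method): resolve the path through any ViewFs mount table with FileSystem#resolvePath, then take the scheme/authority root of the resolved path as the cluster root.

```java
import java.io.IOException;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical utility, not part of FileSystem: derive the "cluster root"
// of a path by resolving it (which follows ViewFs mount links) and keeping
// only the scheme and authority of the resolved location.
public final class ClusterRootUtil {

  private ClusterRootUtil() {
  }

  public static Path getClusterRoot(FileSystem fs, Path p) throws IOException {
    Path resolved = fs.resolvePath(p);   // e.g. a viewfs path resolved to its target cluster
    URI u = resolved.toUri();
    return new Path(u.getScheme(), u.getAuthority(), "/");
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    System.out.println(getClusterRoot(fs, new Path("/data/raw/2020/06/24")));
  }
}
```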
[jira] [Updated] (HADOOP-17079) Optimize UGI#getGroups by adding UGI#getGroupsSet
[ https://issues.apache.org/jira/browse/HADOOP-17079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HADOOP-17079: Attachment: HADOOP-17079.002.patch > Optimize UGI#getGroups by adding UGI#getGroupsSet > - > > Key: HADOOP-17079 > URL: https://issues.apache.org/jira/browse/HADOOP-17079 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao >Priority: Major > Attachments: HADOOP-17079.002.patch > > > UGI#getGroups has been optimized with HADOOP-13442 by avoiding the > List->Set->List conversion. However the returned list is not optimized to > contains lookup, especially the user's group membership list is huge > (thousands+) . This ticket is opened to add a UGI#getGroupsSet and use > Set#contains() instead of List#contains() to speed up large group look up > while minimize List->Set conversions in Groups#getGroups() call. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
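For context on why the Set variant matters, a minimal self-contained example (illustrative names, not the UGI API): List#contains scans every element on each membership check, while a HashSet does a constant-time hash lookup, which adds up when a user belongs to thousands of groups and authorization checks are frequent.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Illustrative comparison: membership checks on a List are O(n) per call,
// on a HashSet they are O(1), so converting once and reusing the Set pays
// off when group lists are large and checks are frequent.
public class GroupLookupSketch {
  public static void main(String[] args) {
    List<String> groupsList = Arrays.asList("analysts", "etl", "hdfs-admins");
    Set<String> groupsSet = new HashSet<>(groupsList); // one-time conversion

    System.out.println(groupsList.contains("hdfs-admins")); // linear scan
    System.out.println(groupsSet.contains("hdfs-admins"));  // hash lookup
  }
}
```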
[jira] [Commented] (HADOOP-17079) Optimize UGI#getGroups by adding UGI#getGroupsSet
[ https://issues.apache.org/jira/browse/HADOOP-17079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17144288#comment-17144288 ] Xiaoyu Yao commented on HADOOP-17079: - Attach the patch file to trigger Jenkins. The PR link somehow does not work for me. > Optimize UGI#getGroups by adding UGI#getGroupsSet > - > > Key: HADOOP-17079 > URL: https://issues.apache.org/jira/browse/HADOOP-17079 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao >Priority: Major > Attachments: HADOOP-17079.002.patch > > > UGI#getGroups has been optimized with HADOOP-13442 by avoiding the > List->Set->List conversion. However the returned list is not optimized to > contains lookup, especially the user's group membership list is huge > (thousands+) . This ticket is opened to add a UGI#getGroupsSet and use > Set#contains() instead of List#contains() to speed up large group look up > while minimize List->Set conversions in Groups#getGroups() call. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] xiaoyuyao commented on pull request #2085: HADOOP-17079. Optimize UGI#getGroups by adding UGI#getGroupsSet.
xiaoyuyao commented on pull request #2085: URL: https://github.com/apache/hadoop/pull/2085#issuecomment-649038401 Thanks @Hexiaoqiao for the review and for the positive results of similar changes in your deployment. I've addressed the feedback in the new commit. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17083) Update guava to 27.0-jre in hadoop branch-2.10
[ https://issues.apache.org/jira/browse/HADOOP-17083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17144254#comment-17144254 ] Hadoop QA commented on HADOOP-17083: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 33s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} | || || || || {color:brown} branch-2.10 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 2m 13s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 31s{color} | {color:green} branch-2.10 passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 1m 3s{color} | {color:red} root in branch-2.10 failed with JDK Oracle Corporation-1.7.0_95-b00. {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 43s{color} | {color:green} branch-2.10 passed with JDK Private Build-1.8.0_252-8u252-b09-1~16.04-b09 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 3s{color} | {color:green} branch-2.10 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 6m 31s{color} | {color:green} branch-2.10 passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 25s{color} | {color:red} hadoop-project in branch-2.10 failed with JDK Oracle Corporation-1.7.0_95-b00. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 26s{color} | {color:red} hadoop-common in branch-2.10 failed with JDK Oracle Corporation-1.7.0_95-b00. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 26s{color} | {color:red} hadoop-hdfs in branch-2.10 failed with JDK Oracle Corporation-1.7.0_95-b00. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 25s{color} | {color:red} hadoop-hdfs-client in branch-2.10 failed with JDK Oracle Corporation-1.7.0_95-b00. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 26s{color} | {color:red} hadoop-hdfs-rbf in branch-2.10 failed with JDK Oracle Corporation-1.7.0_95-b00. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 25s{color} | {color:red} hadoop-mapreduce-client-core in branch-2.10 failed with JDK Oracle Corporation-1.7.0_95-b00. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 25s{color} | {color:red} hadoop-yarn-common in branch-2.10 failed with JDK Oracle Corporation-1.7.0_95-b00. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 26s{color} | {color:red} hadoop-yarn-server-resourcemanager in branch-2.10 failed with JDK Oracle Corporation-1.7.0_95-b00. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 14s{color} | {color:green} branch-2.10 passed with JDK Private Build-1.8.0_252-8u252-b09-1~16.04-b09 {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 1m 59s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 27s{color} | {color:blue} branch/hadoop-project no findbugs output file (findbugsXml.xml) {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 1s{color} | {color:red} hadoop-common-project/hadoop-common in branch-2.10 has 14 extant findbugs warnings. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 54s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in branch-2.10 has 10 extant findbugs warnings. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 12s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in branch-2.10 has 1 extant findbugs warnings. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 35s{color} | {color:red} hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core in branch-2.10 has 3 extant findbugs warnings. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 50s{color} | {color:red}
[GitHub] [hadoop] hadoop-yetus commented on pull request #2099: HADOOP-17089: WASB: Update azure-storage-java SDK
hadoop-yetus commented on pull request #2099: URL: https://github.com/apache/hadoop/pull/2099#issuecomment-649038099 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 53s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 26m 16s | trunk passed | | +1 :green_heart: | compile | 0m 21s | trunk passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 | | +1 :green_heart: | compile | 0m 18s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | mvnsite | 0m 22s | trunk passed | | +1 :green_heart: | shadedclient | 45m 36s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 20s | trunk passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 | | +1 :green_heart: | javadoc | 0m 19s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 12s | the patch passed | | +1 :green_heart: | compile | 0m 12s | the patch passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 | | +1 :green_heart: | javac | 0m 12s | the patch passed | | +1 :green_heart: | compile | 0m 11s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | javac | 0m 11s | the patch passed | | +1 :green_heart: | mvnsite | 0m 15s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 2s | The patch has no ill-formed XML file. | | +1 :green_heart: | shadedclient | 17m 1s | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 21s | the patch passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 | | +1 :green_heart: | javadoc | 0m 17s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | ||| _ Other Tests _ | | +1 :green_heart: | unit | 0m 19s | hadoop-project in the patch passed. | | +1 :green_heart: | asflicense | 0m 32s | The patch does not generate ASF License warnings. | | | | 69m 46s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-2099/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2099 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml | | uname | Linux 50363da089ad 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 84110d850e2 | | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-2099/1/testReport/ | | Max. process+thread count | 446 (vs. 
ulimit of 5500) | | modules | C: hadoop-project U: hadoop-project | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-2099/1/console | | versions | git=2.17.1 maven=3.6.0 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17083) Update guava to 27.0-jre in hadoop branch-2.10
[ https://issues.apache.org/jira/browse/HADOOP-17083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ahmed Hussein updated HADOOP-17083: --- Attachment: HADOOP-17083-branch-2.10.004.patch > Update guava to 27.0-jre in hadoop branch-2.10 > -- > > Key: HADOOP-17083 > URL: https://issues.apache.org/jira/browse/HADOOP-17083 > Project: Hadoop Common > Issue Type: Bug > Components: common, security >Affects Versions: 2.10.0 >Reporter: Ahmed Hussein >Assignee: Ahmed Hussein >Priority: Major > Attachments: HADOOP-17083-branch-2.10.001.patch, > HADOOP-17083-branch-2.10.002.patch, HADOOP-17083-branch-2.10.003.patch, > HADOOP-17083-branch-2.10.004.patch > > > com.google.guava:guava should be upgraded to 27.0-jre due to new CVE's found > [CVE-2018-10237|https://nvd.nist.gov/vuln/detail/CVE-2018-10237]. > > The upgrade should not affect the version of java used. branch-2.10 still > sticks to JDK7 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #2085: HADOOP-17079. Optimize UGI#getGroups by adding UGI#getGroupsSet.
xiaoyuyao commented on a change in pull request #2085: URL: https://github.com/apache/hadoop/pull/2085#discussion_r445132957 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java ## @@ -549,7 +549,6 @@ private boolean hasPermission(INodeAttributes inode, FsAction access) { * - Default entries may be present, but they are ignored during enforcement. * * @param inode INodeAttributes accessed inode - * @param snapshotId int snapshot ID Review comment: I just reverted this change. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #2085: HADOOP-17079. Optimize UGI#getGroups by adding UGI#getGroupsSet.
xiaoyuyao commented on a change in pull request #2085: URL: https://github.com/apache/hadoop/pull/2085#discussion_r445133102 ## File path: hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterUserMappings.java ## @@ -111,6 +112,11 @@ public void cacheGroupsRefresh() throws IOException { @Override public void cacheGroupsAdd(List groups) throws IOException { } + +@Override +public Set getGroupsSet(String user) throws IOException { + return null; Review comment: Fixed with similar logic. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #2085: HADOOP-17079. Optimize UGI#getGroups by adding UGI#getGroupsSet.
xiaoyuyao commented on a change in pull request #2085: URL: https://github.com/apache/hadoop/pull/2085#discussion_r445130674 ## File path: hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/lib/service/security/DummyGroupMapping.java ## @@ -47,4 +48,9 @@ public void cacheGroupsRefresh() throws IOException { @Override public void cacheGroupsAdd(List groups) throws IOException { } + + @Override + public Set getGroupsSet(String user) throws IOException { +return null; Review comment: Fixed. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #2085: HADOOP-17079. Optimize UGI#getGroups by adding UGI#getGroupsSet.
xiaoyuyao commented on a change in pull request #2085: URL: https://github.com/apache/hadoop/pull/2085#discussion_r445128651 ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/AccessControlList.java ## @@ -20,10 +20,7 @@ import java.io.DataInput; import java.io.DataOutput; import java.io.IOException; -import java.util.Collection; -import java.util.HashSet; -import java.util.LinkedList; -import java.util.List; +import java.util.*; Review comment: Fixed. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #2085: HADOOP-17079. Optimize UGI#getGroups by adding UGI#getGroupsSet.
xiaoyuyao commented on a change in pull request #2085: URL: https://github.com/apache/hadoop/pull/2085#discussion_r445128078 ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/JniBasedUnixGroupsMapping.java ## @@ -19,9 +19,9 @@ package org.apache.hadoop.security; import java.io.IOException; -import java.util.Arrays; -import java.util.List; +import java.util.*; Review comment: IntelliJ auto-folded the imports. I will fix them in the next commit. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
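For readers following along, the fix being promised here is presumably just expanding the IDE-folded wildcard back into explicit imports; a minimal sketch (the exact import list in the committed change may differ):

```java
// Explicit imports instead of the IDE-folded wildcard; the exact list
// depends on what JniBasedUnixGroupsMapping actually uses.
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;
```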
[GitHub] [hadoop] hadoop-yetus commented on pull request #2080: HDFS-15417. RBF: Get the datanode report from cache for federation WebHDFS operations
hadoop-yetus commented on pull request #2080: URL: https://github.com/apache/hadoop/pull/2080#issuecomment-649027770 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 35s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 20m 13s | trunk passed | | +1 :green_heart: | compile | 0m 40s | trunk passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 | | +1 :green_heart: | compile | 0m 35s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | checkstyle | 0m 23s | trunk passed | | +1 :green_heart: | mvnsite | 0m 38s | trunk passed | | +1 :green_heart: | shadedclient | 14m 25s | branch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 25s | hadoop-hdfs-rbf in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | +1 :green_heart: | javadoc | 0m 32s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +0 :ok: | spotbugs | 1m 13s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 1m 11s | trunk passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 31s | the patch passed | | +1 :green_heart: | compile | 0m 39s | the patch passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 | | +1 :green_heart: | javac | 0m 39s | the patch passed | | +1 :green_heart: | compile | 0m 30s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | javac | 0m 30s | the patch passed | | +1 :green_heart: | checkstyle | 0m 16s | the patch passed | | +1 :green_heart: | mvnsite | 0m 33s | the patch passed | | +1 :green_heart: | whitespace | 0m 1s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 13m 59s | patch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 26s | hadoop-hdfs-rbf in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | +1 :green_heart: | javadoc | 0m 28s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | findbugs | 1m 15s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 8m 41s | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 31s | The patch does not generate ASF License warnings. 
| | | | 70m 29s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-2080/7/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2080 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 1f3028ae3fc0 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 84110d850e2 | | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-2080/7/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-2080/7/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-2080/7/testReport/ | | Max. process+thread count | 3326 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-2080/7/console | | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was
[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #2085: HADOOP-17079. Optimize UGI#getGroups by adding UGI#getGroupsSet.
xiaoyuyao commented on a change in pull request #2085: URL: https://github.com/apache/hadoop/pull/2085#discussion_r445128418 ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/NullGroupsMapping.java ## @@ -31,6 +33,19 @@ public void cacheGroupsAdd(List groups) { } + /** + * Get all various group memberships of a given user. + * Returns EMPTY set in case of non-existing user + * + * @param user User's name + * @return set of group memberships of user + * @throws IOException + */ + @Override + public Set getGroupsSet(String user) throws IOException { +return null; Review comment: Fixed. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
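The javadoc in the diff above says the method returns an EMPTY set for a non-existing user, so the likely shape of the fix is to return an immutable empty set rather than null. A hedged sketch, not the committed change:

```java
import java.io.IOException;
import java.util.Collections;
import java.util.List;
import java.util.Set;

// Sketch only: a no-op mapping that resolves every user to no groups,
// mirroring what NullGroupsMapping is expected to do in getGroupsSet.
class NoGroupsMappingSketch {
  public List<String> getGroups(String user) throws IOException {
    return Collections.emptyList();
  }

  public Set<String> getGroupsSet(String user) throws IOException {
    // Return an immutable empty set instead of null, per the javadoc above.
    return Collections.emptySet();
  }
}
```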
[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #2085: HADOOP-17079. Optimize UGI#getGroups by adding UGI#getGroupsSet.
xiaoyuyao commented on a change in pull request #2085: URL: https://github.com/apache/hadoop/pull/2085#discussion_r445127216 ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/Groups.java ## @@ -345,28 +373,28 @@ public long read() { * implementation, otherwise is arranges for the cache to be updated later */ @Override -public ListenableFuture> reload(final String key, - List oldValue) +public ListenableFuture> reload(final String key, + Set oldValue) throws Exception { LOG.debug("GroupCacheLoader - reload (async)."); if (!reloadGroupsInBackground) { return super.reload(key, oldValue); } backgroundRefreshQueued.incrementAndGet(); - ListenableFuture> listenableFuture = - executorService.submit(new Callable>() { + ListenableFuture> listenableFuture = + executorService.submit(new Callable>() { Review comment: Good catch. Fixed. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
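For context, the point of this hunk is that the asynchronous reload path now has to produce a Set<String> end to end. A simplified, self-contained sketch of the aligned generics (an assumption about the shape of the patch, not the actual Groups.java code):

```java
import java.util.Collections;
import java.util.Set;
import java.util.concurrent.Callable;
import java.util.concurrent.Executors;

import com.google.common.util.concurrent.ListenableFuture;
import com.google.common.util.concurrent.ListeningExecutorService;
import com.google.common.util.concurrent.MoreExecutors;

// Simplified sketch: the Callable's type parameter must match the
// ListenableFuture<Set<String>> that the cache loader's reload() returns.
class GroupReloadSketch {
  private final ListeningExecutorService executorService =
      MoreExecutors.listeningDecorator(Executors.newSingleThreadExecutor());

  ListenableFuture<Set<String>> reload(final String user) {
    return executorService.submit(new Callable<Set<String>>() {
      @Override
      public Set<String> call() {
        // Hypothetical lookup; the real code delegates to the group mapping provider.
        return Collections.singleton("users");
      }
    });
  }
}
```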
[jira] [Commented] (HADOOP-17090) Increase precommit job timeout from 5 hours to 20 hours
[ https://issues.apache.org/jira/browse/HADOOP-17090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17144207#comment-17144207 ] Ayush Saxena commented on HADOOP-17090: --- Thanx [~aajisaka] for putting this up. Indeed this is extra work every time we have big changes or changes at the parent POM level. I am +1 on increasing the default. > Increase precommit job timeout from 5 hours to 20 hours > --- > > Key: HADOOP-17090 > URL: https://issues.apache.org/jira/browse/HADOOP-17090 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Reporter: Akira Ajisaka >Priority: Major > > Now we frequently increase the timeout for testing and undo the change before > committing. > * https://github.com/apache/hadoop/pull/2026 > * https://github.com/apache/hadoop/pull/2051 > * https://github.com/apache/hadoop/pull/2012 > * https://github.com/apache/hadoop/pull/2098 > * and more... > I'd like to increase the timeout by default to reduce the work. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] ThomasMarquardt opened a new pull request #2099: HADOOP-17089: WASB: Update azure-storage-java SDK
ThomasMarquardt opened a new pull request #2099: URL: https://github.com/apache/hadoop/pull/2099 HADOOP-17089: WASB: Update azure-storage-java SDK DETAILS: WASB depends on the Azure Storage Java SDK. There is a concurrency bug in the Azure Storage Java SDK that can cause the results of a list blobs operation to appear empty. This causes the FileSystem listStatus and similar APIs to return empty results. This has been seen in Spark workloads when jobs use more than one executor core. See https://github.com/Azure/azure-storage-java/pull/546 for details on the bug in the Azure Storage SDK. TESTS: No new tests have been added. All existing tests are passing: wasb: mvn -T 1C -Dparallel-tests=wasb -Dscale -DtestsThreadCount=8 clean verify Tests run: 248, Failures: 0, Errors: 0, Skipped: 11 Tests run: 650, Failures: 0, Errors: 0, Skipped: 65 abfs: mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify Tests run: 64, Failures: 0, Errors: 0, Skipped: 0 Tests run: 437, Failures: 0, Errors: 0, Skipped: 33 Tests run: 206, Failures: 0, Errors: 0, Skipped: 24 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
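To make the symptom concrete, the bug surfaces through ordinary FileSystem calls: a listing issued while other threads are also listing can intermittently come back empty. A hedged illustration of such a call site (the container URI and path are placeholders):

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Illustration only: with the buggy SDK, this listing could intermittently
// report zero entries under concurrent load even though blobs exist.
public class ListWasbDirectory {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(
        new URI("wasb://container@account.blob.core.windows.net/"), conf);
    FileStatus[] entries = fs.listStatus(new Path("/data"));
    System.out.println("entries = " + entries.length);
  }
}
```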
[GitHub] [hadoop] hadoop-yetus commented on pull request #2083: HADOOP-17077. S3A delegation token binding to support secondary binding list
hadoop-yetus commented on pull request #2083: URL: https://github.com/apache/hadoop/pull/2083#issuecomment-648981862 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 55s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. | | +0 :ok: | markdownlint | 0m 1s | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 25m 15s | trunk passed | | +1 :green_heart: | compile | 0m 47s | trunk passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 | | +1 :green_heart: | compile | 0m 35s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | checkstyle | 0m 28s | trunk passed | | +1 :green_heart: | mvnsite | 0m 41s | trunk passed | | +1 :green_heart: | shadedclient | 17m 37s | branch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 35s | hadoop-aws in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | +1 :green_heart: | javadoc | 0m 27s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +0 :ok: | spotbugs | 1m 21s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 1m 18s | trunk passed | | -0 :warning: | patch | 1m 36s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 40s | the patch passed | | +1 :green_heart: | compile | 0m 38s | the patch passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 | | +1 :green_heart: | javac | 0m 38s | the patch passed | | +1 :green_heart: | compile | 0m 30s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | javac | 0m 30s | the patch passed | | -0 :warning: | checkstyle | 0m 21s | hadoop-tools/hadoop-aws: The patch generated 9 new + 18 unchanged - 2 fixed = 27 total (was 20) | | +1 :green_heart: | mvnsite | 0m 33s | the patch passed | | -1 :x: | whitespace | 0m 0s | The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply | | +1 :green_heart: | shadedclient | 16m 33s | patch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 27s | hadoop-aws in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | +1 :green_heart: | javadoc | 0m 24s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | -1 :x: | findbugs | 1m 15s | hadoop-tools/hadoop-aws generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 18s | hadoop-aws in the patch passed. | | +1 :green_heart: | asflicense | 0m 29s | The patch does not generate ASF License warnings. 
| | | | 74m 52s | | | Reason | Tests | |---:|:--| | FindBugs | module:hadoop-tools/hadoop-aws | | | Unread field:SecondaryDelegationToken.java:[line 186] | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-2083/4/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2083 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle markdownlint | | uname | Linux ae74b57eb298 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 84110d850e2 | | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-2083/4/artifact/out/branch-javadoc-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-2083/4/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt | | whitespace |
[GitHub] [hadoop] sunchao commented on pull request #2080: HDFS-15417. RBF: Get the datanode report from cache for federation WebHDFS operations
sunchao commented on pull request #2080: URL: https://github.com/apache/hadoop/pull/2080#issuecomment-648979719 Thanks @Hexiaoqiao. Similarly, in our case we also disabled this for router metrics but need WebHDFS on the router. Relying on `getDatanodeReport` is not the ideal approach, but this PR is a step up from the existing approach. @NickyYe thanks for addressing the checkstyle issue. Could you also add a unit test? I think we need one for `getCachedDatanodeReport`. You can add it in `TestRouterRpc` and verify the cache refresh logic. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
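A rough sketch of the kind of test being asked for; only `getCachedDatanodeReport` and `TestRouterRpc` come from the comment above, everything else (the stub accessor, report type, and assertion) is assumed:

```java
import static org.junit.Assert.assertArrayEquals;

import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType;
import org.junit.Test;

// Hypothetical skeleton, not the actual TestRouterRpc code.
public class CachedDatanodeReportTestSketch {

  // Stand-in for the router RPC server under test.
  interface RouterRpc {
    DatanodeInfo[] getDatanodeReport(DatanodeReportType type) throws Exception;
    DatanodeInfo[] getCachedDatanodeReport(DatanodeReportType type) throws Exception;
  }

  private RouterRpc routerRpc;  // would be wired up from the mini federation cluster

  @Test
  public void testCachedDatanodeReport() throws Exception {
    DatanodeInfo[] fresh = routerRpc.getDatanodeReport(DatanodeReportType.LIVE);
    DatanodeInfo[] cached = routerRpc.getCachedDatanodeReport(DatanodeReportType.LIVE);
    // Before the refresh interval elapses, the cached view should match.
    assertArrayEquals(fresh, cached);
  }
}
```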
[jira] [Updated] (HADOOP-17090) Increase precommit job timeout from 5 hours to 20 hours
[ https://issues.apache.org/jira/browse/HADOOP-17090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-17090: --- Description: Now we frequently increase the timeout for testing and undo the change before committing. * https://github.com/apache/hadoop/pull/2026 * https://github.com/apache/hadoop/pull/2051 * https://github.com/apache/hadoop/pull/2012 * https://github.com/apache/hadoop/pull/2098 * and more... I'd like to increase the timeout by default to reduce the work. was: Now we frequently increase the timeout for testing and undo the change before committing. * https://github.com/apache/hadoop/pull/2026 * https://github.com/apache/hadoop/pull/2051 * https://github.com/apache/hadoop/pull/2012 * and more... I'd like to increase the timeout by default to reduce the work. > Increase precommit job timeout from 5 hours to 20 hours > --- > > Key: HADOOP-17090 > URL: https://issues.apache.org/jira/browse/HADOOP-17090 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Reporter: Akira Ajisaka >Priority: Major > > Now we frequently increase the timeout for testing and undo the change before > committing. > * https://github.com/apache/hadoop/pull/2026 > * https://github.com/apache/hadoop/pull/2051 > * https://github.com/apache/hadoop/pull/2012 > * https://github.com/apache/hadoop/pull/2098 > * and more... > I'd like to increase the timeout by default to reduce the work. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17091) Javadoc failing with "cannot find symbol com.google.protobuf.GeneratedMessageV3 implements"
[ https://issues.apache.org/jira/browse/HADOOP-17091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17144112#comment-17144112 ] Akira Ajisaka commented on HADOOP-17091: Moved to Hadoop common. > Javadoc failing with "cannot find symbol > com.google.protobuf.GeneratedMessageV3 implements" > > > Key: HADOOP-17091 > URL: https://issues.apache.org/jira/browse/HADOOP-17091 > Project: Hadoop Common > Issue Type: Bug > Components: build > Environment: Java 11 >Reporter: Uma Maheswara Rao G >Assignee: Akira Ajisaka >Priority: Major > > {noformat} > [INFO] > > [INFO] BUILD FAILURE > [INFO] > > [INFO] Total time: 17.982 s > [INFO] Finished at: 2020-06-20T01:56:28Z > [INFO] > > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:javadoc (default-cli) on > project hadoop-hdfs: An error has occurred in Javadoc report generation: > [ERROR] Exit code: 1 - javadoc: warning - You have specified the HTML version > as HTML 4.01 by using the -html4 option. > [ERROR] The default is currently HTML5 and the support for HTML 4.01 will be > removed > [ERROR] in a future release. To suppress this warning, please ensure that any > HTML constructs > [ERROR] in your comments are valid in HTML5, and remove the -html4 option. > [ERROR] > /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2084/src/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/server/namenode/FsImageProto.java:25197: > error: cannot find symbol > [ERROR] com.google.protobuf.GeneratedMessageV3 implements > [ERROR] ^ > [ERROR] symbol: class GeneratedMessageV3 > [ERROR] location: package com.google.protobuf > [ERROR] > /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2084/src/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/server/namenode/FsImageProto.java:25319: > error: cannot find symbol > [ERROR] com.google.protobuf.GeneratedMessageV3 implements > [ERROR]^ > [ERROR] symbol: class GeneratedMessageV3 > [ERROR] location: package com.google.protobuf > [ERROR] > /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2084/src/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/server/namenode/FsImageProto.java:26068: > error: cannot find symbol > [ERROR] com.google.protobuf.GeneratedMessageV3 implements > [ERROR]^ > [ERROR] symbol: class GeneratedMessageV3 > [ERROR] location: package com.google.protobuf > [ERROR] > /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2084/src/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/server/namenode/FsImageProto.java:26073: > error: package com.google.protobuf.GeneratedMessageV3 does not exist > [ERROR] private > PersistToken(com.google.protobuf.GeneratedMessageV3.Builder builder) { > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Moved] (HADOOP-17091) Javadoc failing with "cannot find symbol com.google.protobuf.GeneratedMessageV3 implements"
[ https://issues.apache.org/jira/browse/HADOOP-17091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka moved HDFS-15424 to HADOOP-17091: --- Component/s: (was: build) build Key: HADOOP-17091 (was: HDFS-15424) Project: Hadoop Common (was: Hadoop HDFS) > Javadoc failing with "cannot find symbol > com.google.protobuf.GeneratedMessageV3 implements" > > > Key: HADOOP-17091 > URL: https://issues.apache.org/jira/browse/HADOOP-17091 > Project: Hadoop Common > Issue Type: Bug > Components: build > Environment: Java 11 >Reporter: Uma Maheswara Rao G >Assignee: Akira Ajisaka >Priority: Major > > {noformat} > [INFO] > > [INFO] BUILD FAILURE > [INFO] > > [INFO] Total time: 17.982 s > [INFO] Finished at: 2020-06-20T01:56:28Z > [INFO] > > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:javadoc (default-cli) on > project hadoop-hdfs: An error has occurred in Javadoc report generation: > [ERROR] Exit code: 1 - javadoc: warning - You have specified the HTML version > as HTML 4.01 by using the -html4 option. > [ERROR] The default is currently HTML5 and the support for HTML 4.01 will be > removed > [ERROR] in a future release. To suppress this warning, please ensure that any > HTML constructs > [ERROR] in your comments are valid in HTML5, and remove the -html4 option. > [ERROR] > /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2084/src/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/server/namenode/FsImageProto.java:25197: > error: cannot find symbol > [ERROR] com.google.protobuf.GeneratedMessageV3 implements > [ERROR] ^ > [ERROR] symbol: class GeneratedMessageV3 > [ERROR] location: package com.google.protobuf > [ERROR] > /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2084/src/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/server/namenode/FsImageProto.java:25319: > error: cannot find symbol > [ERROR] com.google.protobuf.GeneratedMessageV3 implements > [ERROR]^ > [ERROR] symbol: class GeneratedMessageV3 > [ERROR] location: package com.google.protobuf > [ERROR] > /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2084/src/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/server/namenode/FsImageProto.java:26068: > error: cannot find symbol > [ERROR] com.google.protobuf.GeneratedMessageV3 implements > [ERROR]^ > [ERROR] symbol: class GeneratedMessageV3 > [ERROR] location: package com.google.protobuf > [ERROR] > /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2084/src/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/server/namenode/FsImageProto.java:26073: > error: package com.google.protobuf.GeneratedMessageV3 does not exist > [ERROR] private > PersistToken(com.google.protobuf.GeneratedMessageV3.Builder builder) { > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-17090) Increase precommit job timeout from 5 hours to 20 hours
Akira Ajisaka created HADOOP-17090: -- Summary: Increase precommit job timeout from 5 hours to 20 hours Key: HADOOP-17090 URL: https://issues.apache.org/jira/browse/HADOOP-17090 Project: Hadoop Common Issue Type: Improvement Components: build Reporter: Akira Ajisaka Now we frequently increase the timeout for testing and undo the change before committing. * https://github.com/apache/hadoop/pull/2026 * https://github.com/apache/hadoop/pull/2051 * https://github.com/apache/hadoop/pull/2012 * and more... I'd like to increase the timeout by default to reduce the work. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-17088) Failed to load Xinclude files with relative path in case of loading conf via URI
[ https://issues.apache.org/jira/browse/HADOOP-17088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17144036#comment-17144036 ] Yushi Hayasaka edited comment on HADOOP-17088 at 6/24/20, 4:53 PM: --- [~jeagles] Thanks for the comment. But I think it is not restricted now even if the included files are provided as an absolute path (i.e. it goes without this patch) or the path of configuration file is provided as String (not URI), right? was (Author: yhaya): [~jeagles] Thanks for the comment. But I think it is not restricted now even if the included files are provided as an absolute path (i.e. it goes without this patch) or the path of configuration file is provided as String, right? > Failed to load Xinclude files with relative path in case of loading conf via > URI > > > Key: HADOOP-17088 > URL: https://issues.apache.org/jira/browse/HADOOP-17088 > Project: Hadoop Common > Issue Type: Bug >Reporter: Yushi Hayasaka >Priority: Major > > When we create a configuration file, which load a external XML file with > relative path, and try to load it via calling `Configuration.addResource` > with `Path(URI)`, we got an error, which failed to load a external XML, after > https://issues.apache.org/jira/browse/HADOOP-14216 is merged. > {noformat} > Exception in thread "main" java.lang.RuntimeException: java.io.IOException: > Fetch fail on include for 'mountTable.xml' with no fallback while loading > 'file:/opt/hadoop/etc/hadoop/core-site.xml' > at > org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:3021) > at > org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2973) > at > org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2848) > at > org.apache.hadoop.conf.Configuration.iterator(Configuration.java:2896) > at com.company.test.Main.main(Main.java:29) > Caused by: java.io.IOException: Fetch fail on include for 'mountTable.xml' > with no fallback while loading 'file:/opt/hadoop/etc/hadoop/core-site.xml' > at > org.apache.hadoop.conf.Configuration$Parser.handleEndElement(Configuration.java:3271) > at > org.apache.hadoop.conf.Configuration$Parser.parseNext(Configuration.java:3331) > at > org.apache.hadoop.conf.Configuration$Parser.parse(Configuration.java:3114) > at > org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:3007) > ... 4 more > {noformat} > The cause is that the URI is passed as string to java.io.File constructor and > File does not support the file URI, so my suggestion is trying to convert > from string to URI at first. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17088) Failed to load Xinclude files with relative path in case of loading conf via URI
[ https://issues.apache.org/jira/browse/HADOOP-17088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17144036#comment-17144036 ] Yushi Hayasaka commented on HADOOP-17088: - [~jeagles] Thanks for the comment. But I think it is not restricted now even if the included files are provided as an absolute path (i.e. it goes without this patch) or the path of configuration file is provided as String, right? > Failed to load Xinclude files with relative path in case of loading conf via > URI > > > Key: HADOOP-17088 > URL: https://issues.apache.org/jira/browse/HADOOP-17088 > Project: Hadoop Common > Issue Type: Bug >Reporter: Yushi Hayasaka >Priority: Major > > When we create a configuration file, which load a external XML file with > relative path, and try to load it via calling `Configuration.addResource` > with `Path(URI)`, we got an error, which failed to load a external XML, after > https://issues.apache.org/jira/browse/HADOOP-14216 is merged. > {noformat} > Exception in thread "main" java.lang.RuntimeException: java.io.IOException: > Fetch fail on include for 'mountTable.xml' with no fallback while loading > 'file:/opt/hadoop/etc/hadoop/core-site.xml' > at > org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:3021) > at > org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2973) > at > org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2848) > at > org.apache.hadoop.conf.Configuration.iterator(Configuration.java:2896) > at com.company.test.Main.main(Main.java:29) > Caused by: java.io.IOException: Fetch fail on include for 'mountTable.xml' > with no fallback while loading 'file:/opt/hadoop/etc/hadoop/core-site.xml' > at > org.apache.hadoop.conf.Configuration$Parser.handleEndElement(Configuration.java:3271) > at > org.apache.hadoop.conf.Configuration$Parser.parseNext(Configuration.java:3331) > at > org.apache.hadoop.conf.Configuration$Parser.parse(Configuration.java:3114) > at > org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:3007) > ... 4 more > {noformat} > The cause is that the URI is passed as string to java.io.File constructor and > File does not support the file URI, so my suggestion is trying to convert > from string to URI at first. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
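The root cause is easy to reproduce in isolation: `java.io.File(String)` keeps the `file:` scheme as part of the path, while `java.io.File(URI)` resolves it to a real filesystem path, which is why converting the string to a URI first fixes the include resolution. A minimal, self-contained illustration (the path is a placeholder):

{code:java}
import java.io.File;
import java.net.URI;

public class FileUriDemo {
  public static void main(String[] args) throws Exception {
    String resource = "file:/opt/hadoop/etc/hadoop/core-site.xml";

    // Passed as a plain string, "file:" stays in the path, so relative
    // includes resolved against getParent() break.
    File fromString = new File(resource);
    System.out.println(fromString.getPath());

    // Converting to a URI first yields the real filesystem path.
    File fromUri = new File(new URI(resource));
    System.out.println(fromUri.getPath());
  }
}
{code}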
[jira] [Resolved] (HADOOP-17015) ABFS: Make PUT and POST operations idempotent
[ https://issues.apache.org/jira/browse/HADOOP-17015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thomas Marqardt resolved HADOOP-17015. -- Resolution: Fixed Sneha and I discussed this. The common Hadoop scenario is a case where you have one or more tasks, each operating on different source files, all attempting to rename to a common destination. In this scenario, the fix in PR 2021 is correct. There are scenarios where PR 2021 will lead to incorrect results, but they seem to be very contrived and unlikely in Hadoop. A work item will be opened to investigate the need to improve this on the server-side, for example by allowing an operation-id to be passed to the rename operation and persisted in the destination metadata, but for now we have this fix to the driver on the client-side. > ABFS: Make PUT and POST operations idempotent > - > > Key: HADOOP-17015 > URL: https://issues.apache.org/jira/browse/HADOOP-17015 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.2.1 >Reporter: Sneha Vijayarajan >Assignee: Sneha Vijayarajan >Priority: Major > Fix For: 3.4.0 > > > Currently, when a PUT or POST operation times out and the server has already > successfully executed the operation, there is no check in the driver to see if > the operation succeeded; it just retries the same operation again. > This can cause the driver to throw invalid user errors. > > Sample scenario: > # Rename request times out, though the server has successfully executed the > operation. > # Driver retries the rename and gets a source-not-found error. > In this scenario, the driver needs to check whether the rename is being retried, > and succeed if the source is not found but the destination is present. > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
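In code terms, the client-side check described above amounts to roughly the following; an illustrative sketch of the decision logic only (the Store interface and retry flag are stand-ins, not the actual ABFS driver types):

{code:java}
import java.io.FileNotFoundException;
import java.io.IOException;

// Illustrative sketch of the rename idempotency check, not ABFS driver code.
class RenameIdempotencySketch {
  interface Store {
    boolean rename(String src, String dst) throws IOException;
    boolean exists(String path) throws IOException;
  }

  boolean renameWithRecovery(Store store, String src, String dst,
      boolean isRetriedRequest) throws IOException {
    try {
      return store.rename(src, dst);
    } catch (FileNotFoundException sourceMissing) {
      // If this is a retry and the destination is already in place, the
      // earlier (timed-out) attempt evidently succeeded: report success.
      if (isRetriedRequest && !store.exists(src) && store.exists(dst)) {
        return true;
      }
      throw sourceMissing;
    }
  }
}
{code}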
[jira] [Commented] (HADOOP-17083) Update guava to 27.0-jre in hadoop branch-2.10
[ https://issues.apache.org/jira/browse/HADOOP-17083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143945#comment-17143945 ] Ahmed Hussein commented on HADOOP-17083: *findbugs Errors:* *===* *FindBugs module: hadoop-common-project/hadoop-common* * Null passed for non-null parameter of writeXml(String, Writer) in org.apache.hadoop.conf.Configuration.writeXml(Writer) At Configuration.java:of writeXml(String, Writer) in org.apache.hadoop.conf.Configuration.writeXml(Writer) At Configuration.java:[line 3051] * Null passed for non-null parameter of com.google.common.base.Optional.fromNullable(Object) in org.apache.hadoop.conf.ReconfigurableBase$ReconfigurationThread.run() Method invoked at ReconfigurableBase.java:of com.google.common.base.Optional.fromNullable(Object) in org.apache.hadoop.conf.ReconfigurableBase$ReconfigurationThread.run() Method invoked at ReconfigurableBase.java:[line 151] *FindBugs module: hadoop-hdfs-project/hadoop-hdfs* * Null passed for non-null parameter of com.google.common.base.Preconditions.checkState(boolean, String, Object, Object, Object) in org.apache.hadoop.hdfs.qjournal.server.Journal.getPersistedPaxosData(long) Method invoked at Journal.java:of com.google.common.base.Preconditions.checkState(boolean, String, Object, Object, Object) in org.apache.hadoop.hdfs.qjournal.server.Journal.getPersistedPaxosData(long) Method invoked at Journal.java:[line 1057] * Null passed for non-null parameter of com.google.common.base.Preconditions.checkArgument(boolean, String, Object) in org.apache.hadoop.hdfs.qjournal.server.JournalNode.getLogDir(String) Method invoked at JournalNode.java:of com.google.common.base.Preconditions.checkArgument(boolean, String, Object) in org.apache.hadoop.hdfs.qjournal.server.JournalNode.getLogDir(String) Method invoked at JournalNode.java:[line 256] * Nullcheck of jid at line 259 of value previously dereferenced in org.apache.hadoop.hdfs.qjournal.server.JournalNode.getLogDir(String) At JournalNode.java:259 of value previously dereferenced in org.apache.hadoop.hdfs.qjournal.server.JournalNode.getLogDir(String) At JournalNode.java:[line 256] *FindBugs module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager* * Null passed for non-null parameter of com.google.common.util.concurrent.SettableFuture.set(Object) in org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$UpdateAppTransition.transition(RMStateStore, RMStateStoreEvent) At RMStateStore.java:of com.google.common.util.concurrent.SettableFuture.set(Object) in org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$UpdateAppTransition.transition(RMStateStore, RMStateStoreEvent) At RMStateStore.java:[line 267] * Null passed for non-null parameter of com.google.common.util.concurrent.SettableFuture.set(Object) in org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.updateApplicationPriority(Priority, ApplicationId, SettableFuture, UserGroupInformation) At CapacityScheduler.java:of com.google.common.util.concurrent.SettableFuture.set(Object) in org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.updateApplicationPriority(Priority, ApplicationId, SettableFuture, UserGroupInformation) At CapacityScheduler.java:[line 2335] > Update guava to 27.0-jre in hadoop branch-2.10 > -- > > Key: HADOOP-17083 > URL: https://issues.apache.org/jira/browse/HADOOP-17083 > Project: Hadoop Common > Issue Type: Bug > Components: common, security >Affects Versions: 2.10.0 
>Reporter: Ahmed Hussein >Assignee: Ahmed Hussein >Priority: Major > Attachments: HADOOP-17083-branch-2.10.001.patch, > HADOOP-17083-branch-2.10.002.patch, HADOOP-17083-branch-2.10.003.patch > > > com.google.guava:guava should be upgraded to 27.0-jre due to new CVE's found > [CVE-2018-10237|https://nvd.nist.gov/vuln/detail/CVE-2018-10237]. > > The upgrade should not affect the version of java used. branch-2.10 still > sticks to JDK7 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
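Most of these warnings share a single pattern: after the Guava upgrade, FindBugs treats the Guava parameter as non-null and flags call sites that may pass a possibly-null value into it. A generic, hedged illustration of the pattern (not the actual Hadoop call sites):

{code:java}
import com.google.common.base.Preconditions;

// Generic illustration of the "null passed for non-null parameter" pattern.
class PreconditionsNullWarningExample {
  static String requireJournalId(String jid) {
    // If jid can be null here, FindBugs flags this call: the possibly-null
    // value is passed as the message argument that it considers non-null.
    Preconditions.checkArgument(jid != null && !jid.isEmpty(),
        "bad journal identifier: %s", jid);
    return jid;
  }
}
{code}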
[jira] [Commented] (HADOOP-17088) Failed to load Xinclude files with relative path in case of loading conf via URI
[ https://issues.apache.org/jira/browse/HADOOP-17088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143908#comment-17143908 ] Jonathan Turner Eagles commented on HADOOP-17088: - One important security feature is to disallow xml resources from outside of the classpath. Does this enforce this constraint? > Failed to load Xinclude files with relative path in case of loading conf via > URI > > > Key: HADOOP-17088 > URL: https://issues.apache.org/jira/browse/HADOOP-17088 > Project: Hadoop Common > Issue Type: Bug >Reporter: Yushi Hayasaka >Priority: Major > > When we create a configuration file, which load a external XML file with > relative path, and try to load it via calling `Configuration.addResource` > with `Path(URI)`, we got an error, which failed to load a external XML, after > https://issues.apache.org/jira/browse/HADOOP-14216 is merged. > {noformat} > Exception in thread "main" java.lang.RuntimeException: java.io.IOException: > Fetch fail on include for 'mountTable.xml' with no fallback while loading > 'file:/opt/hadoop/etc/hadoop/core-site.xml' > at > org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:3021) > at > org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2973) > at > org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2848) > at > org.apache.hadoop.conf.Configuration.iterator(Configuration.java:2896) > at com.company.test.Main.main(Main.java:29) > Caused by: java.io.IOException: Fetch fail on include for 'mountTable.xml' > with no fallback while loading 'file:/opt/hadoop/etc/hadoop/core-site.xml' > at > org.apache.hadoop.conf.Configuration$Parser.handleEndElement(Configuration.java:3271) > at > org.apache.hadoop.conf.Configuration$Parser.parseNext(Configuration.java:3331) > at > org.apache.hadoop.conf.Configuration$Parser.parse(Configuration.java:3114) > at > org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:3007) > ... 4 more > {noformat} > The cause is that the URI is passed as string to java.io.File constructor and > File does not support the file URI, so my suggestion is trying to convert > from string to URI at first. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] crossfire commented on pull request #2097: HADOOP-17088. Failed to load Xinclude files with relative path in cas…
crossfire commented on pull request #2097: URL: https://github.com/apache/hadoop/pull/2097#issuecomment-648868600 Hmm, it seems to be caused by https://issues.apache.org/jira/browse/HDFS-15424. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17083) Update guava to 27.0-jre in hadoop branch-2.10
[ https://issues.apache.org/jira/browse/HADOOP-17083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ahmed Hussein updated HADOOP-17083: --- Attachment: HADOOP-17083-branch-2.10.003.patch > Update guava to 27.0-jre in hadoop branch-2.10 > -- > > Key: HADOOP-17083 > URL: https://issues.apache.org/jira/browse/HADOOP-17083 > Project: Hadoop Common > Issue Type: Bug > Components: common, security >Affects Versions: 2.10.0 >Reporter: Ahmed Hussein >Assignee: Ahmed Hussein >Priority: Major > Attachments: HADOOP-17083-branch-2.10.001.patch, > HADOOP-17083-branch-2.10.002.patch, HADOOP-17083-branch-2.10.003.patch > > > com.google.guava:guava should be upgraded to 27.0-jre due to new CVE's found > [CVE-2018-10237|https://nvd.nist.gov/vuln/detail/CVE-2018-10237]. > > The upgrade should not affect the version of java used. branch-2.10 still > sticks to JDK7 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #2097: HADOOP-17088. Failed to load Xinclude files with relative path in cas…
hadoop-yetus commented on pull request #2097: URL: https://github.com/apache/hadoop/pull/2097#issuecomment-648860632 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 28m 17s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 26m 41s | trunk passed | | +1 :green_heart: | compile | 21m 16s | trunk passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 | | +1 :green_heart: | compile | 16m 52s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | checkstyle | 0m 58s | trunk passed | | +1 :green_heart: | mvnsite | 1m 27s | trunk passed | | +1 :green_heart: | shadedclient | 16m 35s | branch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 45s | hadoop-common in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | +1 :green_heart: | javadoc | 1m 2s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +0 :ok: | spotbugs | 2m 12s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 2m 9s | trunk passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 51s | the patch passed | | +1 :green_heart: | compile | 18m 47s | the patch passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 | | +1 :green_heart: | javac | 18m 47s | the patch passed | | +1 :green_heart: | compile | 16m 48s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | javac | 16m 48s | the patch passed | | +1 :green_heart: | checkstyle | 0m 56s | the patch passed | | +1 :green_heart: | mvnsite | 1m 25s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 13m 57s | patch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 44s | hadoop-common in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | +1 :green_heart: | javadoc | 1m 4s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | findbugs | 2m 15s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 9m 25s | hadoop-common in the patch passed. | | +1 :green_heart: | asflicense | 0m 55s | The patch does not generate ASF License warnings. 
| | | | 185m 48s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-2097/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2097 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux ffb57d7a2de9 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 84110d850e2 | | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-2097/1/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-2097/1/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-2097/1/testReport/ | | Max. process+thread count | 3255 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-2097/1/console | | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache
[jira] [Commented] (HADOOP-17077) S3A delegation token binding to support secondary binding list
[ https://issues.apache.org/jira/browse/HADOOP-17077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143848#comment-17143848 ] Steve Loughran commented on HADOOP-17077: - testing this highlights that fetchdt and dtutil both expect single tokens in an FS. Fixing the S3A DT fetcher so that dtutil will retrieve all; filing HDFS-15435 and HDFS-15433 for the other fixes. > S3A delegation token binding to support secondary binding list > -- > > Key: HADOOP-17077 > URL: https://issues.apache.org/jira/browse/HADOOP-17077 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > > (followon from HADOOP-17050) > Add the ability of an S3A FS instance to support multiple instances of > delegation token bindings. > The property "fs.s3a.delegation.token.secondary.bindings" will list the > classnames of all secondary bindings. > For each one, an instance shall be created with the canonical service name > being: fs URI + [ tokenKind ]. This is to ensure that the URIs are unique for > each FS instance -but also that a single fs instance can have multiple tokens > in the credential list. > The instance is just an AbstractDelegationTokenBinding provider of an AWS > credential provider chain, with the normal lifecycle and operations to bind > to a DT, issue tokens, etc. > * The final list of AWS Credential providers will be built by appending those > provided by each binding in turn. > Token binding at launch > If the primary token binding binds to a delegation token, then the whole > binding is changed such that all secondary tokens MUST also bind. That is: it > will be an error if one cannot be found. This is possibly over-strict, but it > avoids situations where an incomplete set of tokens is retrieved and this > does not surface until later. > Only the encryption secrets in the primary DT will be used for FS encryption > settings. > Testing: yes. > Probably also by adding a test-only DT provider which doesn't actually issue > any real credentials and so which can be deployed in both ITests and staging > tests where we can verify that the chained instantiation works. > Compatibility: the goal is to be backwards compatible with any already > released token provider plugin. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
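[Editorial note] For illustration only, a minimal sketch of how the property proposed in HADOOP-17077 might be wired up through the standard Configuration API. The secondary-bindings key comes from the issue text and the secondary binding classname below is a hypothetical placeholder; the primary-binding key shown is the existing fs.s3a.delegation.token.binding property.
{code:java}
import org.apache.hadoop.conf.Configuration;

public class SecondaryBindingConfigSketch {
  public static Configuration exampleConf() {
    Configuration conf = new Configuration();
    // Existing primary binding property already shipped with S3A delegation tokens.
    conf.set("fs.s3a.delegation.token.binding",
        "org.apache.hadoop.fs.s3a.auth.delegation.SessionTokenBinding");
    // Proposed in HADOOP-17077: a list of secondary binding classnames.
    // The class below is a hypothetical placeholder, not a real binding.
    conf.set("fs.s3a.delegation.token.secondary.bindings",
        "com.example.auth.ExampleSecondaryTokenBinding");
    return conf;
  }
}
{code}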
[jira] [Resolved] (HADOOP-17050) S3A to support additional token issuers
[ https://issues.apache.org/jira/browse/HADOOP-17050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran resolved HADOOP-17050. - Resolution: Fixed > S3A to support additional token issuers > --- > > Key: HADOOP-17050 > URL: https://issues.apache.org/jira/browse/HADOOP-17050 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Gabor Bota >Assignee: Steve Loughran >Priority: Minor > Fix For: 3.3.1 > > > In > {{org.apache.hadoop.fs.s3a.auth.delegation.AbstractDelegationTokenBinding}} > the {{createDelegationToken}} should return a list of tokens. > With this functionality, the {{AbstractDelegationTokenBinding}} can get two > different tokens at the same time. > {{AbstractDelegationTokenBinding.TokenSecretManager}} should be extended to > retrieve secrets and lookup delegation tokens (use the public API for > secretmanager in hadoop) -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-17089) WASB: Update azure-storage-java SDK
Thomas Marqardt created HADOOP-17089: Summary: WASB: Update azure-storage-java SDK Key: HADOOP-17089 URL: https://issues.apache.org/jira/browse/HADOOP-17089 Project: Hadoop Common Issue Type: Bug Components: fs/azure Affects Versions: 3.2.0, 3.1.0, 3.0.0, 2.9.0, 2.8.0, 2.7.0 Reporter: Thomas Marqardt Assignee: Thomas Marqardt WASB depends on the Azure Storage Java SDK. There is a concurrency bug in the Azure Storage Java SDK that can cause the results of a list blobs operation to appear empty. This causes the Filesystem listStatus and similar APIs to return empty results. This has been seen in Spark work loads when jobs use more than one executor core. See [https://github.com/Azure/azure-storage-java/pull/546] for details on the bug in the Azure Storage SDK. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17087) Add EC flag to stat commands
[ https://issues.apache.org/jira/browse/HADOOP-17087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143828#comment-17143828 ] Hadoop QA commented on HADOOP-17087: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 34s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 17m 47s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 2m 14s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 12s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 41s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 17s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 44s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 47s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}115m 21s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/PreCommit-HADOOP-Build/16998/artifact/out/Dockerfile | | JIRA Issue | HADOOP-17087 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13006334/HADOOP-17087.001.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 840440110f24 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 84110d850e2 | | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/16998/testReport/ | | Max. process+thread count | 2288 (vs. ulimit of 5500) | | modules
[jira] [Reopened] (HADOOP-17015) ABFS: Make PUT and POST operations idempotent
[ https://issues.apache.org/jira/browse/HADOOP-17015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thomas Marqardt reopened HADOOP-17015: -- We should revisit PR 2021 and try to find a better solution for rename. Users expect rename to be atomic. The service implementation is atomic, but we have this client-side idempotency issue. This fix relies on time and assumes that if the destination was recently updated while we are executing a retry policy, we succeeded. This may not be the case. For example, users may rely on rename (with overwrite = false) of a file to synchronize or act like a distributed lock, so whoever renames successfully acquires the lock. With the fix in PR 2021, more than one caller could acquire this lock at the same time. Instead, I think we could allow the client to provide a UUID for the rename operation and persist this UUID in the metadata of the destination blob upon successful completion of a rename. Then, if we get into this idempotency issue and the client gets a 404 (source does not exist), we can check the destination blob's metadata to see if the UUID is a match. > ABFS: Make PUT and POST operations idempotent > - > > Key: HADOOP-17015 > URL: https://issues.apache.org/jira/browse/HADOOP-17015 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.2.1 >Reporter: Sneha Vijayarajan >Assignee: Sneha Vijayarajan >Priority: Major > Fix For: 3.4.0 > > > Currently, when a PUT or POST operation times out and the server has already > successfully executed the operation, there is no check in the driver to see if > the operation did succeed or not; it just retries the same operation again. > This can cause the driver to throw invalid user errors. > > Sample scenario: > # Rename request times out, though the server has successfully executed the > operation. > # Driver retries the rename and gets a source-not-found error. > In this scenario, the driver needs to check whether the rename is being retried and > succeed if the source is not found but the destination is present. > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
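[Editorial note] As a rough illustration of the UUID idea described above (not the actual ABFS client code), the sketch below assumes a hypothetical store interface whose rename call records a client-supplied UUID in the destination blob's metadata; on a 404 during a retry, the client treats the rename as successful only if that UUID is found on the destination, rather than relying on a recent modification time.
{code:java}
import java.io.FileNotFoundException;
import java.util.UUID;

/**
 * Minimal sketch of the idempotent-rename idea. AbfsStore and its methods
 * are hypothetical stand-ins, not the real ABFS client API.
 */
public class IdempotentRenameSketch {

  interface AbfsStore {
    // Hypothetical: performs the rename and persists renameId in the
    // destination blob's metadata on success.
    void rename(String src, String dst, String renameId) throws Exception;

    // Hypothetical: reads the rename UUID back from the destination's metadata.
    String getDestinationRenameId(String dst) throws Exception;
  }

  static boolean renameIdempotent(AbfsStore store, String src, String dst)
      throws Exception {
    // The client picks a UUID for this rename attempt.
    String renameId = UUID.randomUUID().toString();
    try {
      store.rename(src, dst, renameId);
      return true;
    } catch (FileNotFoundException e) {
      // 404 on the source during a retry: the earlier attempt may already
      // have succeeded. Treat it as success only if the destination carries
      // our UUID.
      return renameId.equals(store.getDestinationRenameId(dst));
    }
  }
}
{code}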
[jira] [Resolved] (HADOOP-17054) ABFS: Fix idempotency test failures when SharedKey is set as AuthType
[ https://issues.apache.org/jira/browse/HADOOP-17054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thomas Marqardt resolved HADOOP-17054. -- Resolution: Fixed Accidentally reactivated HADOOP-17015 but meant to reactivate HADOOP-17054. Please ignore previous comment. > ABFS: Fix idempotency test failures when SharedKey is set as AuthType > - > > Key: HADOOP-17054 > URL: https://issues.apache.org/jira/browse/HADOOP-17054 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.2.1 >Reporter: Sneha Vijayarajan >Assignee: Sneha Vijayarajan >Priority: Major > Fix For: 3.4.0 > > > Idempotency related tests added as part of > https://issues.apache.org/jira/browse/HADOOP-17015 > create a test AbfsClient instance. This mock instance wrongly accepts valid > sharedKey and oauth token provider instance. This leads to test failures with > exceptions: > [ERROR] > testRenameRetryFailureAsHTTP404(org.apache.hadoop.fs.azurebfs.ITestAzureBlobFileSystemRename) > Time elapsed: 9.133 s <<< ERROR! > Invalid auth type: SharedKey is being used, expecting OAuth > at > org.apache.hadoop.fs.azurebfs.AbfsConfiguration.getTokenProvider(AbfsConfiguration.java:643) > This Jira is to fix these tests. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Assigned] (HADOOP-17086) Parsing errors in ABFS Driver with creation Time (being returned in ListPath)
[ https://issues.apache.org/jira/browse/HADOOP-17086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bilahari T H reassigned HADOOP-17086: - Assignee: Bilahari T H > Parsing errors in ABFS Driver with creation Time (being returned in ListPath) > - > > Key: HADOOP-17086 > URL: https://issues.apache.org/jira/browse/HADOOP-17086 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Ishani >Assignee: Bilahari T H >Priority: Major > > I am seeing errors while running ABFS Driver against stg75 build in canary. > This is related to parsing errors as we receive creationTIme in the ListPath > API. Here are the errors: > RestVersion: 2020-02-10 > mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify > -Dit.test=ITestAzureBlobFileSystemRenameUnicode > [ERROR] > testRenameFileUsingUnicode[0](org.apache.hadoop.fs.azurebfs.ITestAzureBlobFileSystemRenameUnicode) > Time elapsed: 852.083 s <<< ERROR! > Status code: -1 error code: null error message: > InvalidAbfsRestOperationExceptionorg.codehaus.jackson.map.exc.UnrecognizedPropertyException: > Unrecognized field "creationTime" (Class org.apache.hadoop. > .azurebfs.contracts.services.ListResultEntrySchema), not marked as ignorable > at [Source: > [sun.net.www.protocol.http.HttpURLConnection$HttpInputStream@49e30796|mailto:sun.net.www.protocol.http.HttpURLConnection$HttpInputStream@49e30796];%20line:%201,%20column:%2048] > (through reference chain: > org.apache.hadoop.fs.azurebfs.contracts.services.ListResultSchema["pat > "]->org.apache.hadoop.fs.azurebfs.contracts.services.ListResultEntrySchema["creationTime"]) > at > org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.executeHttpOperation(AbfsRestOperation.java:273) > at > org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.execute(AbfsRestOperation.java:188) > at > org.apache.hadoop.fs.azurebfs.services.AbfsClient.listPath(AbfsClient.java:237) > at > org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.listStatus(AzureBlobFileSystemStore.java:773) > at > org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.listStatus(AzureBlobFileSystemStore.java:735) > at > org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.listStatus(AzureBlobFileSystem.java:373) > at > org.apache.hadoop.fs.azurebfs.ITestAzureBlobFileSystemRenameUnicode.testRenameFileUsingUnicode(ITestAzureBlobFileSystemRenameUnicode.java:92) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.base/java.lang.reflect.Method.invoke(Method.java:566) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) > at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298) > at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292) > 
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) > at java.base/java.lang.Thread.run(Thread.java:834) > Caused by: org.codehaus.jackson.map.exc.UnrecognizedPropertyException: > Unrecognized field "creationTime" (Class > org.apache.hadoop.fs.azurebfs.contracts.services.ListResultEntrySchema), not > marked as ignorable > at [Source: > sun.net.www.protocol.http.HttpURLConnection$HttpInputStream@49e30796; line: 1, column: 48] > (through reference chain: > org.apache.hadoop.fs.azurebfs.contracts.services.ListResultSchema["pat > "]->org.apache.hadoop.fs.azurebfs.contracts.services.ListResultEntrySchema["creationTime"]) > at > org.codehaus.jackson.map.exc.UnrecognizedPropertyException.from(UnrecognizedPropertyException.java:53) > at > org.codehaus.jackson.map.deser.StdDeserializationContext.unknownFieldException(StdDeserializationContext.java:267) > at >
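[Editorial note] For context, a small sketch (assuming Jackson 1.x, the org.codehaus.jackson classes shown in the trace) of two generic ways to stop an unknown field such as "creationTime" from failing deserialization. This is not necessarily the fix adopted for HADOOP-17086; the schema class below is a simplified stand-in for ListResultEntrySchema.
{code:java}
import org.codehaus.jackson.annotate.JsonIgnoreProperties;
import org.codehaus.jackson.map.DeserializationConfig;
import org.codehaus.jackson.map.ObjectMapper;

public class IgnoreUnknownFieldsSketch {

  // Option 1: mark the schema class so unknown fields such as
  // "creationTime" are skipped instead of failing the whole listing.
  @JsonIgnoreProperties(ignoreUnknown = true)
  static class ListResultEntrySchemaLike {
    public String name;
    public String lastModified;
    // creationTime intentionally absent; it would simply be ignored.
  }

  // Option 2: configure the mapper itself to tolerate unknown properties.
  static ObjectMapper lenientMapper() {
    ObjectMapper mapper = new ObjectMapper();
    mapper.configure(
        DeserializationConfig.Feature.FAIL_ON_UNKNOWN_PROPERTIES, false);
    return mapper;
  }
}
{code}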
[GitHub] [hadoop] aajisaka opened a new pull request #2098: HDFS-15424. Javadoc failing with "cannot find symbol com.google.protobuf.GeneratedMessageV3 implements"
aajisaka opened a new pull request #2098: URL: https://github.com/apache/hadoop/pull/2098 This PR is to test https://issues.apache.org/jira/browse/YETUS-972 * Apply #2094 * Use https://github.com/apache/yetus/pull/112 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17087) Add EC flag to stat commands
[ https://issues.apache.org/jira/browse/HADOOP-17087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143785#comment-17143785 ] Hongbing Wang commented on HADOOP-17087: ok, I understand. I agree that no change is best. Thanks [~ayushtkn] for the patient guidance. > Add EC flag to stat commands > > > Key: HADOOP-17087 > URL: https://issues.apache.org/jira/browse/HADOOP-17087 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Reporter: Hongbing Wang >Priority: Major > Attachments: HADOOP-17087.001.patch > > > We currently do not have a brief way to judge an ec file. {{hdfs fsck}} can > do but shows too much information. Neither {{du}} nor {{ls}} can accurately > judge the ec file. > So I added ec flag to stat cli. > old result: > {code:java} > $ hadoop fs -stat "%F" /user/ec/ec.txt > regular file > $ hadoop fs -stat "%F" /user/rep/rep.txt > regular file > {code} > new result: > {code:java} > $ hadoop fs -stat "%F" /user/ec/ec.txt > erasure coding file > $ hadoop fs -stat "%F" /user/rep/rep.txt > replica file > {code} > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17087) Add EC flag to stat commands
[ https://issues.apache.org/jira/browse/HADOOP-17087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143782#comment-17143782 ] Ayush Saxena commented on HADOOP-17087: --- For just the EC policy you can use getErasureCodingPolicy() or the hdfs ec admin commands. Check here: https://hadoop.apache.org/docs/r3.2.1/hadoop-project-dist/hadoop-hdfs/HDFSErasureCoding.html#Administrative_commands - hdfs ec -getPolicy will help. I guess there are a bunch of ways to get the desired result already, and moreover you can't change the CLI output of the command for the %F option. Changing CLI output is incompatible and isn't allowed; if required you need a new option for that, and I don't think there is a very strong need for one, given we would also need to handle the cases for FileSystems which don't support EC and so on. > Add EC flag to stat commands > > > Key: HADOOP-17087 > URL: https://issues.apache.org/jira/browse/HADOOP-17087 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Reporter: Hongbing Wang >Priority: Major > Attachments: HADOOP-17087.001.patch > > > We currently do not have a brief way to judge an ec file. {{hdfs fsck}} can > do but shows too much information. Neither {{du}} nor {{ls}} can accurately > judge the ec file. > So I added ec flag to stat cli. > old result: > {code:java} > $ hadoop fs -stat "%F" /user/ec/ec.txt > regular file > $ hadoop fs -stat "%F" /user/rep/rep.txt > regular file > {code} > new result: > {code:java} > $ hadoop fs -stat "%F" /user/ec/ec.txt > erasure coding file > $ hadoop fs -stat "%F" /user/rep/rep.txt > replica file > {code} > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
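[Editorial note] To make the existing options mentioned above concrete, here is a minimal sketch (assuming an HDFS default filesystem with EC enabled, and reusing the example path from the issue) that checks the erasure-coding state through the client APIs the comment refers to, FileStatus.isErasureCoded() and DistributedFileSystem.getErasureCodingPolicy(), rather than a new stat option.
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy;

public class EcCheckSketch {
  public static void main(String[] args) throws Exception {
    Path p = new Path("/user/ec/ec.txt");   // example path from the issue
    Configuration conf = new Configuration();
    try (DistributedFileSystem dfs =
        (DistributedFileSystem) p.getFileSystem(conf)) {
      // FileStatus already carries an erasure-coding flag.
      FileStatus st = dfs.getFileStatus(p);
      System.out.println("isErasureCoded = " + st.isErasureCoded());

      // The EC policy itself, if one applies to the file.
      ErasureCodingPolicy policy = dfs.getErasureCodingPolicy(p);
      System.out.println("policy = "
          + (policy == null ? "none" : policy.getName()));
    }
  }
}
{code}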
[jira] [Updated] (HADOOP-17088) Failed to load Xinclude files with relative path in case of loading conf via URI
[ https://issues.apache.org/jira/browse/HADOOP-17088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yushi Hayasaka updated HADOOP-17088: Description: When we create a configuration file, which load a external XML file with relative path, and try to load it via calling `Configuration.addResource` with `Path(URI)`, we got an error, which failed to load a external XML, after https://issues.apache.org/jira/browse/HADOOP-14216 is merged. {noformat} Exception in thread "main" java.lang.RuntimeException: java.io.IOException: Fetch fail on include for 'mountTable.xml' with no fallback while loading 'file:/opt/hadoop/etc/hadoop/core-site.xml' at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:3021) at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2973) at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2848) at org.apache.hadoop.conf.Configuration.iterator(Configuration.java:2896) at com.company.test.Main.main(Main.java:29) Caused by: java.io.IOException: Fetch fail on include for 'mountTable.xml' with no fallback while loading 'file:/opt/hadoop/etc/hadoop/core-site.xml' at org.apache.hadoop.conf.Configuration$Parser.handleEndElement(Configuration.java:3271) at org.apache.hadoop.conf.Configuration$Parser.parseNext(Configuration.java:3331) at org.apache.hadoop.conf.Configuration$Parser.parse(Configuration.java:3114) at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:3007) ... 4 more {noformat} The cause is that the URI is passed as string to java.io.File constructor and File does not support the file URI, so my suggestion is trying to convert from string to URI at first. was: When we create a configuration file, which load a external XML file with relative path, and try to load it with calling `Configuration.addResource(URI)`, we got an error, which failed to load a external XML, after [https://issues.apache.org/jira/browse/HADOOP-14216] is merged. {noformat} Exception in thread "main" java.lang.RuntimeException: java.io.IOException: Fetch fail on include for 'mountTable.xml' with no fallback while loading 'file:/opt/hadoop/etc/hadoop/core-site.xml' at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:3021) at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2973) at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2848) at org.apache.hadoop.conf.Configuration.iterator(Configuration.java:2896) at com.company.test.Main.main(Main.java:29) Caused by: java.io.IOException: Fetch fail on include for 'mountTable.xml' with no fallback while loading 'file:/opt/hadoop/etc/hadoop/core-site.xml' at org.apache.hadoop.conf.Configuration$Parser.handleEndElement(Configuration.java:3271) at org.apache.hadoop.conf.Configuration$Parser.parseNext(Configuration.java:3331) at org.apache.hadoop.conf.Configuration$Parser.parse(Configuration.java:3114) at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:3007) ... 4 more {noformat} The cause is that the URI is passed as string to java.io.File constructor and File does not support the file URI, so my suggestion is trying to convert from string to URI at first. 
> Failed to load Xinclude files with relative path in case of loading conf via > URI > > > Key: HADOOP-17088 > URL: https://issues.apache.org/jira/browse/HADOOP-17088 > Project: Hadoop Common > Issue Type: Bug >Reporter: Yushi Hayasaka >Priority: Major > > When we create a configuration file, which load a external XML file with > relative path, and try to load it via calling `Configuration.addResource` > with `Path(URI)`, we got an error, which failed to load a external XML, after > https://issues.apache.org/jira/browse/HADOOP-14216 is merged. > {noformat} > Exception in thread "main" java.lang.RuntimeException: java.io.IOException: > Fetch fail on include for 'mountTable.xml' with no fallback while loading > 'file:/opt/hadoop/etc/hadoop/core-site.xml' > at > org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:3021) > at > org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2973) > at > org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2848) > at > org.apache.hadoop.conf.Configuration.iterator(Configuration.java:2896) > at com.company.test.Main.main(Main.java:29) > Caused by: java.io.IOException: Fetch fail on include for 'mountTable.xml' > with no fallback while loading 'file:/opt/hadoop/etc/hadoop/core-site.xml' > at >
[jira] [Updated] (HADOOP-17088) Failed to load Xinclude files with relative path in case of loading conf via URI
[ https://issues.apache.org/jira/browse/HADOOP-17088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yushi Hayasaka updated HADOOP-17088: Status: Patch Available (was: Open) PR: [https://github.com/apache/hadoop/pull/2097] > Failed to load Xinclude files with relative path in case of loading conf via > URI > > > Key: HADOOP-17088 > URL: https://issues.apache.org/jira/browse/HADOOP-17088 > Project: Hadoop Common > Issue Type: Bug >Reporter: Yushi Hayasaka >Priority: Major > > When we create a configuration file, which load a external XML file with > relative path, and try to load it with calling > `Configuration.addResource(URI)`, we got an error, which failed to load a > external XML, after [https://issues.apache.org/jira/browse/HADOOP-14216] is > merged. > {noformat} > Exception in thread "main" java.lang.RuntimeException: java.io.IOException: > Fetch fail on include for 'mountTable.xml' with no fallback while loading > 'file:/opt/hadoop/etc/hadoop/core-site.xml' > at > org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:3021) > at > org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2973) > at > org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2848) > at > org.apache.hadoop.conf.Configuration.iterator(Configuration.java:2896) > at com.company.test.Main.main(Main.java:29) > Caused by: java.io.IOException: Fetch fail on include for 'mountTable.xml' > with no fallback while loading 'file:/opt/hadoop/etc/hadoop/core-site.xml' > at > org.apache.hadoop.conf.Configuration$Parser.handleEndElement(Configuration.java:3271) > at > org.apache.hadoop.conf.Configuration$Parser.parseNext(Configuration.java:3331) > at > org.apache.hadoop.conf.Configuration$Parser.parse(Configuration.java:3114) > at > org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:3007) > ... 4 more > {noformat} > The cause is that the URI is passed as string to java.io.File constructor and > File does not support the file URI, so my suggestion is trying to convert > from string to URI at first. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17087) Add EC flag to stat commands
[ https://issues.apache.org/jira/browse/HADOOP-17087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143771#comment-17143771 ] Hongbing Wang commented on HADOOP-17087: Stat is a comprehensive description of the file. Maybe the ec flag should also be added in stat, I think. Do you [~ayushtkn] think it's necessary? > Add EC flag to stat commands > > > Key: HADOOP-17087 > URL: https://issues.apache.org/jira/browse/HADOOP-17087 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Reporter: Hongbing Wang >Priority: Major > Attachments: HADOOP-17087.001.patch > > > We currently do not have a brief way to judge an ec file. {{hdfs fsck}} can > do but shows too much information. Neither {{du}} nor {{ls}} can accurately > judge the ec file. > So I added ec flag to stat cli. > old result: > {code:java} > $ hadoop fs -stat "%F" /user/ec/ec.txt > regular file > $ hadoop fs -stat "%F" /user/rep/rep.txt > regular file > {code} > new result: > {code:java} > $ hadoop fs -stat "%F" /user/ec/ec.txt > erasure coding file > $ hadoop fs -stat "%F" /user/rep/rep.txt > replica file > {code} > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17087) Add EC flag to stat commands
[ https://issues.apache.org/jira/browse/HADOOP-17087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143769#comment-17143769 ] Hongbing Wang commented on HADOOP-17087: {quote} Ls with -e option {quote} Sorry i didn't notice this way before. Yahh~ it's good. Thanks [~ayushtkn] > Add EC flag to stat commands > > > Key: HADOOP-17087 > URL: https://issues.apache.org/jira/browse/HADOOP-17087 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Reporter: Hongbing Wang >Priority: Major > Attachments: HADOOP-17087.001.patch > > > We currently do not have a brief way to judge an ec file. {{hdfs fsck}} can > do but shows too much information. Neither {{du}} nor {{ls}} can accurately > judge the ec file. > So I added ec flag to stat cli. > old result: > {code:java} > $ hadoop fs -stat "%F" /user/ec/ec.txt > regular file > $ hadoop fs -stat "%F" /user/rep/rep.txt > regular file > {code} > new result: > {code:java} > $ hadoop fs -stat "%F" /user/ec/ec.txt > erasure coding file > $ hadoop fs -stat "%F" /user/rep/rep.txt > replica file > {code} > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] crossfire opened a new pull request #2097: HADOOP-17088. Failed to load Xinclude files with relative path in cas…
crossfire opened a new pull request #2097: URL: https://github.com/apache/hadoop/pull/2097 …e of loading conf via URI https://issues.apache.org/jira/browse/HADOOP-17088 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-17088) Failed to load Xinclude files with relative path in case of loading conf via URI
Yushi Hayasaka created HADOOP-17088: --- Summary: Failed to load Xinclude files with relative path in case of loading conf via URI Key: HADOOP-17088 URL: https://issues.apache.org/jira/browse/HADOOP-17088 Project: Hadoop Common Issue Type: Bug Reporter: Yushi Hayasaka When we create a configuration file which loads an external XML file with a relative path, and try to load it by calling `Configuration.addResource(URI)`, we get an error because the external XML fails to load, after [https://issues.apache.org/jira/browse/HADOOP-14216] was merged. {noformat} Exception in thread "main" java.lang.RuntimeException: java.io.IOException: Fetch fail on include for 'mountTable.xml' with no fallback while loading 'file:/opt/hadoop/etc/hadoop/core-site.xml' at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:3021) at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2973) at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2848) at org.apache.hadoop.conf.Configuration.iterator(Configuration.java:2896) at com.company.test.Main.main(Main.java:29) Caused by: java.io.IOException: Fetch fail on include for 'mountTable.xml' with no fallback while loading 'file:/opt/hadoop/etc/hadoop/core-site.xml' at org.apache.hadoop.conf.Configuration$Parser.handleEndElement(Configuration.java:3271) at org.apache.hadoop.conf.Configuration$Parser.parseNext(Configuration.java:3331) at org.apache.hadoop.conf.Configuration$Parser.parse(Configuration.java:3114) at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:3007) ... 4 more {noformat} The cause is that the URI is passed as a string to the java.io.File constructor, and File does not support a file URI given as a string, so my suggestion is to convert the string to a URI first. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
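[Editorial note] A small self-contained sketch of the reported cause (not the actual patch in PR 2097): passing the "file:" URI string straight to java.io.File keeps the scheme inside the path, so a relative XInclude such as mountTable.xml cannot be resolved against the real parent directory, while converting the string to a java.net.URI first yields the actual filesystem path.
{code:java}
import java.io.File;
import java.net.URI;

public class FileUriSketch {
  public static void main(String[] args) {
    String resource = "file:/opt/hadoop/etc/hadoop/core-site.xml";

    // Passing the URI string straight to File treats "file:" as part of
    // the path, so it does not point at the real configuration file.
    File wrong = new File(resource);
    System.out.println(wrong.getAbsolutePath()); // ends with ".../file:/opt/hadoop/etc/hadoop/core-site.xml"

    // Converting to a URI first yields the real filesystem path, so the
    // parent directory is available for resolving "mountTable.xml".
    File right = new File(URI.create(resource));
    System.out.println(right.getParent()); // /opt/hadoop/etc/hadoop
  }
}
{code}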
[jira] [Commented] (HADOOP-17087) Add EC flag to stat commands
[ https://issues.apache.org/jira/browse/HADOOP-17087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143759#comment-17143759 ] Ayush Saxena commented on HADOOP-17087: --- Thanx [~wanghongbing] for the report with Ls command do you mean using Ls with -e option? that gives the ec policy of the file if it is erasure coded. > Add EC flag to stat commands > > > Key: HADOOP-17087 > URL: https://issues.apache.org/jira/browse/HADOOP-17087 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Reporter: Hongbing Wang >Priority: Major > Attachments: HADOOP-17087.001.patch > > > We currently do not have a brief way to judge an ec file. {{hdfs fsck}} can > do but shows too much information. Neither {{du}} nor {{ls}} can accurately > judge the ec file. > So I added ec flag to stat cli. > old result: > {code:java} > $ hadoop fs -stat "%F" /user/ec/ec.txt > regular file > $ hadoop fs -stat "%F" /user/rep/rep.txt > regular file > {code} > new result: > {code:java} > $ hadoop fs -stat "%F" /user/ec/ec.txt > erasure coding file > $ hadoop fs -stat "%F" /user/rep/rep.txt > replica file > {code} > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17087) Add EC flag to stat commands
[ https://issues.apache.org/jira/browse/HADOOP-17087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ayush Saxena updated HADOOP-17087: -- Status: Patch Available (was: Open) > Add EC flag to stat commands > > > Key: HADOOP-17087 > URL: https://issues.apache.org/jira/browse/HADOOP-17087 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Reporter: Hongbing Wang >Priority: Major > Attachments: HADOOP-17087.001.patch > > > We currently do not have a brief way to judge an ec file. {{hdfs fsck}} can > do but shows too much information. Neither {{du}} nor {{ls}} can accurately > judge the ec file. > So I added ec flag to stat cli. > old result: > {code:java} > $ hadoop fs -stat "%F" /user/ec/ec.txt > regular file > $ hadoop fs -stat "%F" /user/rep/rep.txt > regular file > {code} > new result: > {code:java} > $ hadoop fs -stat "%F" /user/ec/ec.txt > erasure coding file > $ hadoop fs -stat "%F" /user/rep/rep.txt > replica file > {code} > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17087) Add EC flag to stat commands
[ https://issues.apache.org/jira/browse/HADOOP-17087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hongbing Wang updated HADOOP-17087: --- Attachment: HADOOP-17087.001.patch > Add EC flag to stat commands > > > Key: HADOOP-17087 > URL: https://issues.apache.org/jira/browse/HADOOP-17087 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Reporter: Hongbing Wang >Priority: Major > Attachments: HADOOP-17087.001.patch > > > We currently do not have a brief way to judge an ec file. {{hdfs fsck}} can > do but shows too much information. Neither {{du}} nor {{ls}} can accurately > judge the ec file. > So I added ec flag to stat cli. > old result: > {code:java} > $ hadoop fs -stat "%F" /user/ec/ec.txt > regular file > $ hadoop fs -stat "%F" /user/rep/rep.txt > regular file > {code} > new result: > {code:java} > $ hadoop fs -stat "%F" /user/ec/ec.txt > erasure coding file > $ hadoop fs -stat "%F" /user/rep/rep.txt > replica file > {code} > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #2038: HADOOP-17022 Tune S3AFileSystem.listFiles() api.
hadoop-yetus commented on pull request #2038: URL: https://github.com/apache/hadoop/pull/2038#issuecomment-648744924 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 40s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 3 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 23m 35s | trunk passed | | +1 :green_heart: | compile | 0m 41s | trunk passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 | | +1 :green_heart: | compile | 0m 32s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | checkstyle | 0m 21s | trunk passed | | +1 :green_heart: | mvnsite | 0m 37s | trunk passed | | +1 :green_heart: | shadedclient | 18m 23s | branch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 33s | hadoop-aws in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | +1 :green_heart: | javadoc | 0m 29s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +0 :ok: | spotbugs | 1m 12s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 1m 9s | trunk passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 41s | the patch passed | | +1 :green_heart: | compile | 0m 39s | the patch passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 | | +1 :green_heart: | javac | 0m 39s | the patch passed | | +1 :green_heart: | compile | 0m 32s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | javac | 0m 32s | the patch passed | | -0 :warning: | checkstyle | 0m 22s | hadoop-tools/hadoop-aws: The patch generated 5 new + 15 unchanged - 1 fixed = 20 total (was 16) | | +1 :green_heart: | mvnsite | 0m 36s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 17m 7s | patch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 33s | hadoop-aws in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | +1 :green_heart: | javadoc | 0m 27s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | findbugs | 1m 16s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 31s | hadoop-aws in the patch passed. | | +1 :green_heart: | asflicense | 0m 32s | The patch does not generate ASF License warnings. 
| | | | 73m 38s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-2038/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2038 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 7e3adea5ae02 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 84110d850e2 | | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-2038/3/artifact/out/branch-javadoc-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-2038/3/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-2038/3/artifact/out/patch-javadoc-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-2038/3/testReport/ | | Max. process+thread count | 454 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-2038/3/console | | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This
[jira] [Updated] (HADOOP-17087) Add EC flag to stat commands
[ https://issues.apache.org/jira/browse/HADOOP-17087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hongbing Wang updated HADOOP-17087: --- Description: We currently do not have a brief way to judge an ec file. {{hdfs fsck}} can do but shows too much information. Neither {{du}} nor {{ls}} can accurately judge the ec file. So I added ec flag to stat cli. old result: {code:java} $ hadoop fs -stat "%F" /user/ec/ec.txt regular file $ hadoop fs -stat "%F" /user/rep/rep.txt regular file {code} new result: {code:java} $ hadoop fs -stat "%F" /user/ec/ec.txt erasure coding file $ hadoop fs -stat "%F" /user/rep/rep.txt replica file {code} was: We currently do not have a brief way to judge an ec file. {{hdfs fsck}} can do but shows too much information. Neither {{du}} nor {{ls}} can accurately judge the ec file. So I added ec flag to stat cli. old result: {code:java} $ hadoop fs -stat "%F" /user/ec/ec.txt regular file $ hadoop fs -stat "%F" /user/rep/rep.txt regular file {code} new result: {code:java} $ hadoop fs -stat "%F" /user/ec/ec.txt erasure coding file $ hadoop fs -stat "%F" /user/rep/rep.txt replica file {code} > Add EC flag to stat commands > > > Key: HADOOP-17087 > URL: https://issues.apache.org/jira/browse/HADOOP-17087 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Reporter: Hongbing Wang >Priority: Major > > We currently do not have a brief way to judge an ec file. {{hdfs fsck}} can > do but shows too much information. Neither {{du}} nor {{ls}} can accurately > judge the ec file. > So I added ec flag to stat cli. > old result: > {code:java} > $ hadoop fs -stat "%F" /user/ec/ec.txt > regular file > $ hadoop fs -stat "%F" /user/rep/rep.txt > regular file > {code} > new result: > {code:java} > $ hadoop fs -stat "%F" /user/ec/ec.txt > erasure coding file > $ hadoop fs -stat "%F" /user/rep/rep.txt > replica file > {code} > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-17087) Add EC flag to stat commands
Hongbing Wang created HADOOP-17087: -- Summary: Add EC flag to stat commands Key: HADOOP-17087 URL: https://issues.apache.org/jira/browse/HADOOP-17087 Project: Hadoop Common Issue Type: Improvement Components: common Reporter: Hongbing Wang We currently do not have a brief way to judge an ec file. {{hdfs fsck}} can do but shows too much information. Neither {{du}} nor {{ls}} can accurately judge the ec file. So I added ec flag to stat cli. old result: {code:java} $ hadoop fs -stat "%F" /user/ec/ec.txt regular file $ hadoop fs -stat "%F" /user/rep/rep.txt regular file {code} new result: {code:java} $ hadoop fs -stat "%F" /user/ec/ec.txt erasure coding file $ hadoop fs -stat "%F" /user/rep/rep.txt replica file {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
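[Editorial note] For illustration, a hypothetical helper (not the attached HADOOP-17087.001.patch) showing how a "%F" formatter could branch on the existing FileStatus.isErasureCoded() flag to produce the output proposed in the description.
{code:java}
import org.apache.hadoop.fs.FileStatus;

public class StatFormatSketch {
  // Hypothetical: returns the string a "%F" format specifier might print,
  // distinguishing erasure-coded files from replicated ones.
  static String fileType(FileStatus status) {
    if (status.isDirectory()) {
      return "directory";
    } else if (status.isSymlink()) {
      return "symlink";
    } else if (status.isErasureCoded()) {
      return "erasure coding file";
    } else {
      return "replica file";
    }
  }
}
{code}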
[jira] [Resolved] (HADOOP-16818) ABFS: Combine append+flush calls for blockblob & appendblob
[ https://issues.apache.org/jira/browse/HADOOP-16818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ishani resolved HADOOP-16818. - Release Note: It was decided to drop the usage of this feature/API (combined calls) in the driver. There is a separate JIRA for appendblob support. Resolution: Won't Fix > ABFS: Combine append+flush calls for blockblob & appendblob > > > Key: HADOOP-16818 > URL: https://issues.apache.org/jira/browse/HADOOP-16818 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.3.0 >Reporter: Bilahari T H >Assignee: Ishani >Priority: Minor > > Combine append+flush calls for blockblob & appendblob -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17083) Update guava to 27.0-jre in hadoop branch-2.10
[ https://issues.apache.org/jira/browse/HADOOP-17083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143712#comment-17143712 ] Hadoop QA commented on HADOOP-17083: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 27s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} | || || || || {color:brown} branch-2.10 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 36s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 33s{color} | {color:green} branch-2.10 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 3s{color} | {color:green} branch-2.10 passed with JDK Oracle Corporation-1.7.0_95-b00 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 25s{color} | {color:green} branch-2.10 passed with JDK Private Build-1.8.0_252-8u252-b09-1~16.04-b09 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 6s{color} | {color:green} branch-2.10 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 9m 58s{color} | {color:green} branch-2.10 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 8m 4s{color} | {color:green} branch-2.10 passed with JDK Oracle Corporation-1.7.0_95-b00 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 58s{color} | {color:green} branch-2.10 passed with JDK Private Build-1.8.0_252-8u252-b09-1~16.04-b09 {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 1m 10s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 14s{color} | {color:blue} branch/hadoop-project no findbugs output file (findbugsXml.xml) {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 52s{color} | {color:red} hadoop-common-project/hadoop-common in branch-2.10 has 14 extant findbugs warnings. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 53s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in branch-2.10 has 1 extant findbugs warnings. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 28s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in branch-2.10 has 10 extant findbugs warnings. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 8m 44s{color} | {color:red} hadoop-yarn-project/hadoop-yarn in branch-2.10 has 6 extant findbugs warnings. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 14s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common in branch-2.10 has 1 extant findbugs warnings. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 8s{color} | {color:red} hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core in branch-2.10 has 3 extant findbugs warnings. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 16s{color} | {color:green} the patch passed with JDK Oracle Corporation-1.7.0_95-b00 {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 13m 16s{color} | {color:red} root-jdkOracleCorporation-1.7.0_95-b00 with JDK Oracle Corporation-1.7.0_95-b00 generated 12 new + 1434 unchanged - 1 fixed = 1446 total (was 1435) {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 28s{color} | {color:green} the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~16.04-b09 {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 11m 28s{color} | {color:red}
[GitHub] [hadoop] mukund-thakur commented on pull request #2038: HADOOP-17022 Tune S3AFileSystem.listFiles() api.
mukund-thakur commented on pull request #2038: URL: https://github.com/apache/hadoop/pull/2038#issuecomment-648702503 Latest commit fixes the above test failures. After multiple debugging efforts I thought I should run the tests without s3guard as well. So when I did that I see 8 tests failing all with file not found exception. I will debug these further. `[ERROR] Errors: [ERROR] ITestS3AContractGetFileStatus>AbstractContractGetFileStatusTest.testListFilesEmptyDirectoryNonrecursive:99->AbstractContractGetFileStatusTest.listFilesOnEmptyDir:119 » FileNotFound [ERROR] ITestS3AContractGetFileStatus>AbstractContractGetFileStatusTest.testListFilesEmptyDirectoryRecursive:104->AbstractContractGetFileStatusTest.listFilesOnEmptyDir:119 » FileNotFound [ERROR] ITestS3AContractGetFileStatusV1List>AbstractContractGetFileStatusTest.testListFilesEmptyDirectoryNonrecursive:99->AbstractContractGetFileStatusTest.listFilesOnEmptyDir:119 » FileNotFound [ERROR] ITestS3AContractGetFileStatusV1List>AbstractContractGetFileStatusTest.testListFilesEmptyDirectoryRecursive:104->AbstractContractGetFileStatusTest.listFilesOnEmptyDir:119 » FileNotFound [ERROR] ITestMagicCommitProtocol>AbstractITCommitProtocol.testCommitJobButNotTask:1004->AbstractITCommitProtocol.executeWork:552->AbstractITCommitProtocol.executeWork:568->AbstractITCommitProtocol.lambda$testCommitJobButNotTask$9:1010 » FileNotFound [INFO] [ERROR] Tests run: 1204, Failures: 0, Errors: 5, Skipped: 342` `[ERROR] Errors: [ERROR] ITestS3AContractRootDir.testListEmptyRootDirectory:82->AbstractContractRootDirectoryTest.testListEmptyRootDirectory:199 » FileNotFound [ERROR] ITestS3AContractRootDir>AbstractContractRootDirectoryTest.testSimpleRootListing:239 » FileNotFound [ERROR] ITestS3AEncryptionSSEC.testListEncryptedDir:197 » FileNotFound No such file or... [INFO] [ERROR] Tests run: 110, Failures: 0, Errors: 3, Skipped: 87 ` This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #2096: HDFS-15312. Apply umask when creating directory by WebHDFS
hadoop-yetus commented on pull request #2096: URL: https://github.com/apache/hadoop/pull/2096#issuecomment-648694692 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 31s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +0 :ok: | jshint | 0m 0s | jshint was not available. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 25m 6s | trunk passed | | +1 :green_heart: | shadedclient | 41m 59s | branch has no errors when building and testing our client artifacts. | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 17s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 15m 55s | patch has no errors when building and testing our client artifacts. | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 28s | The patch does not generate ASF License warnings. | | | | 61m 59s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-2096/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2096 | | Optional Tests | dupname asflicense shadedclient jshint | | uname | Linux b4c6b4544bb1 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 84110d850e2 | | Max. process+thread count | 340 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-2096/1/console | | versions | git=2.17.1 maven=3.6.0 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #2080: HDFS-15417. RBF: Get the datanode report from cache for federation WebHDFS operations
hadoop-yetus commented on pull request #2080: URL: https://github.com/apache/hadoop/pull/2080#issuecomment-648693848 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 21m 43s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 19m 15s | trunk passed | | +1 :green_heart: | compile | 0m 40s | trunk passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 | | +1 :green_heart: | compile | 0m 36s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | checkstyle | 0m 25s | trunk passed | | +1 :green_heart: | mvnsite | 0m 38s | trunk passed | | +1 :green_heart: | shadedclient | 15m 12s | branch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 29s | hadoop-hdfs-rbf in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | +1 :green_heart: | javadoc | 0m 34s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +0 :ok: | spotbugs | 1m 10s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 1m 7s | trunk passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 32s | the patch passed | | +1 :green_heart: | compile | 0m 32s | the patch passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 | | +1 :green_heart: | javac | 0m 32s | the patch passed | | +1 :green_heart: | compile | 0m 27s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | javac | 0m 27s | the patch passed | | -0 :warning: | checkstyle | 0m 17s | hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 2 new + 16 unchanged - 0 fixed = 18 total (was 16) | | +1 :green_heart: | mvnsite | 0m 30s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 13m 37s | patch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 26s | hadoop-hdfs-rbf in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | +1 :green_heart: | javadoc | 0m 30s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | findbugs | 1m 12s | the patch passed | ||| _ Other Tests _ | | -1 :x: | unit | 8m 2s | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 33s | The patch does not generate ASF License warnings. 
| | | | 90m 19s | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.federation.router.TestRouterAllResolver | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-2080/6/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2080 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux b13a64c43c9e 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 84110d850e2 | | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-2080/6/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-2080/6/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-2080/6/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-2080/6/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt |
[GitHub] [hadoop] Hexiaoqiao commented on pull request #2080: HDFS-15417. RBF: Get the datanode report from cache for federation WebHDFS operations
Hexiaoqiao commented on pull request #2080: URL: https://github.com/apache/hadoop/pull/2080#issuecomment-648685825 Thanks @sunchao for involving me here. In my internal version I turn off `getDatanodeReport` on the Router side and do not enable the WebHDFS feature; in my experience `getDatanodeReport` is very expensive on large clusters. This PR is almost LGTM from my side. Please check the checkstyle Jenkins report: https://builds.apache.org/job/hadoop-multibranch/job/PR-2080/5/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt. IMO it would be better to add a unit test at the Router to verify this improvement. FYI. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #2072: HADOOP-17058. ABFS: Support for AppendBlob in Hadoop ABFS Driver
hadoop-yetus commented on pull request #2072: URL: https://github.com/apache/hadoop/pull/2072#issuecomment-648679739 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 33s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 11 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 21m 54s | trunk passed | | +1 :green_heart: | compile | 0m 35s | trunk passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 | | +1 :green_heart: | compile | 0m 31s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | checkstyle | 0m 23s | trunk passed | | +1 :green_heart: | mvnsite | 0m 34s | trunk passed | | +1 :green_heart: | shadedclient | 14m 42s | branch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 30s | hadoop-azure in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | +1 :green_heart: | javadoc | 0m 27s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +0 :ok: | spotbugs | 0m 53s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 0m 51s | trunk passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 29s | the patch passed | | +1 :green_heart: | compile | 0m 27s | the patch passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 | | +1 :green_heart: | javac | 0m 27s | the patch passed | | +1 :green_heart: | compile | 0m 25s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | javac | 0m 25s | the patch passed | | +1 :green_heart: | checkstyle | 0m 17s | the patch passed | | +1 :green_heart: | mvnsite | 0m 28s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 13m 55s | patch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 26s | hadoop-azure in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | +1 :green_heart: | javadoc | 0m 24s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | findbugs | 0m 56s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 27s | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 31s | The patch does not generate ASF License warnings. 
| | | | 62m 56s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-2072/11/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2072 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 8ce51c5519cc 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 84110d850e2 | | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-2072/11/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-2072/11/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-2072/11/testReport/ | | Max. process+thread count | 439 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-2072/11/console | | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message,
[GitHub] [hadoop] NickyYe opened a new pull request #2096: HDFS-15312. Apply umask when creating directory by WebHDFS
NickyYe opened a new pull request #2096: URL: https://github.com/apache/hadoop/pull/2096 ## NOTICE Please create an issue in ASF JIRA before opening a pull request, and you need to set the title of the pull request which starts with the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.) For more details, please see https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute https://issues.apache.org/jira/browse/HDFS-15312 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] NickyYe closed pull request #2095: HDFS-15417. Apply umask when creating directory by WebHDFS
NickyYe closed pull request #2095: URL: https://github.com/apache/hadoop/pull/2095 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] NickyYe opened a new pull request #2095: HDFS-15417. Apply umask when creating directory by WebHDFS
NickyYe opened a new pull request #2095: URL: https://github.com/apache/hadoop/pull/2095 ## NOTICE Please create an issue in ASF JIRA before opening a pull request, and you need to set the title of the pull request which starts with the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.) For more details, please see https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute https://issues.apache.org/jira/browse/HDFS-15312 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-16659) ABFS: add missing docs for configuration
[ https://issues.apache.org/jira/browse/HADOOP-16659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bilahari T H resolved HADOOP-16659. --- Resolution: Fixed > ABFS: add missing docs for configuration > > > Key: HADOOP-16659 > URL: https://issues.apache.org/jira/browse/HADOOP-16659 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.2.2 >Reporter: Da Zhou >Assignee: Bilahari T H >Priority: Major > > double-check the docs for ABFS and WASB configurations and add the missing > ones. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16659) ABFS: add missing docs for configuration
[ https://issues.apache.org/jira/browse/HADOOP-16659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17143595#comment-17143595 ] Bilahari T H commented on HADOOP-16659: --- Duplicate task [HADOOP-17004|https://issues.apache.org/jira/browse/HADOOP-17004] > ABFS: add missing docs for configuration > > > Key: HADOOP-16659 > URL: https://issues.apache.org/jira/browse/HADOOP-16659 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.2.2 >Reporter: Da Zhou >Assignee: Bilahari T H >Priority: Major > > double-check the docs for ABFS and WASB configurations and add the missing > ones. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Assigned] (HADOOP-16659) ABFS: add missing docs for configuration
[ https://issues.apache.org/jira/browse/HADOOP-16659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bilahari T H reassigned HADOOP-16659: - Assignee: Bilahari T H > ABFS: add missing docs for configuration > > > Key: HADOOP-16659 > URL: https://issues.apache.org/jira/browse/HADOOP-16659 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.2.2 >Reporter: Da Zhou >Assignee: Bilahari T H >Priority: Major > > double-check the docs for ABFS and WASB configurations and add the missing > ones. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-16913) ABFS: Support for OAuth v2.0 endpoints
[ https://issues.apache.org/jira/browse/HADOOP-16913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17143587#comment-17143587 ] Bilahari T H edited comment on HADOOP-16913 at 6/24/20, 7:17 AM: - This has been checked in as part of the following JIRA: [HADOOP-16916|https://issues.apache.org/jira/browse/HADOOP-16916] was (Author: bilahari.th): This has been checked in as part of the following PR: [PR-1965|https://github.com/apache/hadoop/pull/1965] > ABFS: Support for OAuth v2.0 endpoints > --- > > Key: HADOOP-16913 > URL: https://issues.apache.org/jira/browse/HADOOP-16913 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Bilahari T H >Assignee: Bilahari T H >Priority: Major > > Driver should support v2.0 auth endpoints -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16913) ABFS: Support for OAuth v2.0 endpoints
[ https://issues.apache.org/jira/browse/HADOOP-16913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17143587#comment-17143587 ] Bilahari T H commented on HADOOP-16913: --- This has been checked in as part of the following PR: [PR-1965|https://github.com/apache/hadoop/pull/1965] > ABFS: Support for OAuth v2.0 endpoints > --- > > Key: HADOOP-16913 > URL: https://issues.apache.org/jira/browse/HADOOP-16913 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Bilahari T H >Assignee: Bilahari T H >Priority: Major > > Driver should support v2.0 auth endpoints -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-16913) ABFS: Support for OAuth v2.0 endpoints
[ https://issues.apache.org/jira/browse/HADOOP-16913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bilahari T H resolved HADOOP-16913. --- Resolution: Fixed > ABFS: Support for OAuth v2.0 endpoints > --- > > Key: HADOOP-16913 > URL: https://issues.apache.org/jira/browse/HADOOP-16913 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Bilahari T H >Assignee: Bilahari T H >Priority: Major > > Driver should support v2.0 auth endpoints -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] ishaniahuja commented on pull request #2072: HADOOP-17058. ABFS: Support for AppendBlob in Hadoop ABFS Driver
ishaniahuja commented on pull request #2072: URL: https://github.com/apache/hadoop/pull/2072#issuecomment-648638038 namespace, REST version 2018-11-09: Tests run: 84, Failures: 0, Errors: 0, Skipped: 0 Tests run: 443, Failures: 0, Errors: 0, Skipped: 42 Tests run: 207, Failures: 0, Errors: 0, Skipped: 24 -- non-namespace, old REST version 2018-11-09: Tests run: 84, Failures: 0, Errors: 0, Skipped: 0 Tests run: 443, Failures: 0, Errors: 0, Skipped: 245 Tests run: 207, Failures: 0, Errors: 0, Skipped: 24 --- namespace, REST version 2019-12-12, fs.azure.test.appendblob.enabled=true: Tests run: 84, Failures: 0, Errors: 0, Skipped: 0 Tests run: 443, Failures: 0, Errors: 0, Skipped: 42 Tests run: 207, Failures: 0, Errors: 0, Skipped: 24 --- namespace, REST version 2019-12-12: Tests run: 84, Failures: 0, Errors: 0, Skipped: 0 Tests run: 443, Failures: 0, Errors: 0, Skipped: 42 Tests run: 207, Failures: 0, Errors: 0, Skipped: 24 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
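For context on the matrix above: the third run enables the new append-blob path through the `fs.azure.test.appendblob.enabled` property mentioned in this thread. A minimal sketch of reading such a toggle from a Hadoop `Configuration` follows; the helper class and the default of `false` are assumptions for illustration, not the actual hadoop-azure test code.

```java
import org.apache.hadoop.conf.Configuration;

public final class AppendBlobTestToggle {
  // Property name taken from the comment above; default assumed to be false.
  public static final String FS_AZURE_TEST_APPENDBLOB_ENABLED =
      "fs.azure.test.appendblob.enabled";

  private AppendBlobTestToggle() {
  }

  /** Returns true when the append-blob test runs should be enabled. */
  public static boolean isAppendBlobEnabled(Configuration conf) {
    return conf.getBoolean(FS_AZURE_TEST_APPENDBLOB_ENABLED, false);
  }

  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.setBoolean(FS_AZURE_TEST_APPENDBLOB_ENABLED, true);
    System.out.println("append blob enabled: " + isAppendBlobEnabled(conf));
  }
}
```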
[GitHub] [hadoop] NickyYe commented on a change in pull request #2080: HDFS-15417. RBF: Get the datanode report from cache for federation WebHDFS operations
NickyYe commented on a change in pull request #2080: URL: https://github.com/apache/hadoop/pull/2080#discussion_r444685702 ## File path: hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java ## @@ -1748,4 +1828,58 @@ public void refreshSuperUserGroupsConfiguration() throws IOException { public String[] getGroupsForUser(String user) throws IOException { return routerProto.getGroupsForUser(user); } -} \ No newline at end of file + + /** + * Deals with loading datanode report into the cache and refresh. + */ + private class DatanodeReportCacheLoader + extends CacheLoader { + +private ListeningExecutorService executorService; + +DatanodeReportCacheLoader() { + ThreadFactory threadFactory = new ThreadFactoryBuilder() + .setNameFormat("DatanodeReport-Cache-Reload") + .setDaemon(true) + .build(); + + // Only use 1 thread to refresh cache. + // With coreThreadCount == maxThreadCount we effectively + // create a fixed size thread pool. As allowCoreThreadTimeOut + // has been set, all threads will die after 60 seconds of non use. + ThreadPoolExecutor parentExecutor = new ThreadPoolExecutor( + 1, + 1, + 60, + TimeUnit.SECONDS, + new LinkedBlockingQueue(), + threadFactory); + parentExecutor.allowCoreThreadTimeOut(true); + executorService = MoreExecutors.listeningDecorator(parentExecutor); +} + +@Override +public DatanodeInfo[] load(DatanodeReportType type) throws Exception { + return getCachedDatanodeReportImpl(type); +} + +/** + * Override the reload method to provide an asynchronous implementation, + * so that the query will not be slowed down by the cache refresh. It + * will return the old cache value and schedule a background refresh. + */ +@Override +public ListenableFuture reload( +final DatanodeReportType type, DatanodeInfo[] oldValue) +throws Exception { + ListenableFuture listenableFuture = Review comment: Thank you for the comments. I've addressed all of them. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
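For readers following this review thread, here is a minimal, self-contained sketch of the asynchronous reload pattern being discussed: a Guava `CacheLoader` whose `reload` returns a `ListenableFuture`, so callers keep getting the previous value while a single background thread refreshes the entry. It also folds in the reviewer's suggestion of `MoreExecutors.listeningDecorator(Executors.newSingleThreadExecutor())` and returning the `submit` result directly. The class and the `fetchReport` helper are placeholders, not the actual `RouterRpcServer` code from the patch.

```java
import java.util.concurrent.Executors;

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import com.google.common.util.concurrent.ListenableFuture;
import com.google.common.util.concurrent.ListeningExecutorService;
import com.google.common.util.concurrent.MoreExecutors;

public class AsyncReloadCacheSketch {

  /** Placeholder for the expensive call (e.g. a datanode report per report type). */
  static String fetchReport(String type) {
    return "report-for-" + type + "@" + System.currentTimeMillis();
  }

  public static void main(String[] args) {
    // One background thread is enough for the refresh work.
    final ListeningExecutorService refresher =
        MoreExecutors.listeningDecorator(Executors.newSingleThreadExecutor());

    LoadingCache<String, String> cache = CacheBuilder.newBuilder()
        .build(new CacheLoader<String, String>() {
          @Override
          public String load(String type) {
            // The very first access for a key loads synchronously.
            return fetchReport(type);
          }

          @Override
          public ListenableFuture<String> reload(String type, String oldValue) {
            // Later refreshes run in the background; callers are served
            // oldValue until the future completes.
            return refresher.submit(() -> fetchReport(type));
          }
        });

    System.out.println(cache.getUnchecked("LIVE")); // synchronous first load
    cache.refresh("LIVE");                          // goes through reload()
    System.out.println(cache.getUnchecked("LIVE")); // may still be the old value
    refresher.shutdown();
  }
}
```

As discussed elsewhere in this thread, the patch additionally schedules a periodic refresh of each report type so the cache stays warm even when requests are rare.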
[jira] [Updated] (HADOOP-17086) Parsing errors in ABFS Driver with creation Time (being returned in ListPath)
[ https://issues.apache.org/jira/browse/HADOOP-17086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ishani updated HADOOP-17086: Description: I am seeing errors while running ABFS Driver against stg75 build in canary. This is related to parsing errors as we receive creationTIme in the ListPath API. Here are the errors: RestVersion: 2020-02-10 mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify -Dit.test=ITestAzureBlobFileSystemRenameUnicode [ERROR] testRenameFileUsingUnicode[0](org.apache.hadoop.fs.azurebfs.ITestAzureBlobFileSystemRenameUnicode) Time elapsed: 852.083 s <<< ERROR! Status code: -1 error code: null error message: InvalidAbfsRestOperationExceptionorg.codehaus.jackson.map.exc.UnrecognizedPropertyException: Unrecognized field "creationTime" (Class org.apache.hadoop. .azurebfs.contracts.services.ListResultEntrySchema), not marked as ignorable at [Source: [sun.net.www.protocol.http.HttpURLConnection$HttpInputStream@49e30796|mailto:sun.net.www.protocol.http.HttpURLConnection$HttpInputStream@49e30796];%20line:%201,%20column:%2048] (through reference chain: org.apache.hadoop.fs.azurebfs.contracts.services.ListResultSchema["pat "]->org.apache.hadoop.fs.azurebfs.contracts.services.ListResultEntrySchema["creationTime"]) at org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.executeHttpOperation(AbfsRestOperation.java:273) at org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.execute(AbfsRestOperation.java:188) at org.apache.hadoop.fs.azurebfs.services.AbfsClient.listPath(AbfsClient.java:237) at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.listStatus(AzureBlobFileSystemStore.java:773) at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.listStatus(AzureBlobFileSystemStore.java:735) at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.listStatus(AzureBlobFileSystem.java:373) at org.apache.hadoop.fs.azurebfs.ITestAzureBlobFileSystemRenameUnicode.testRenameFileUsingUnicode(ITestAzureBlobFileSystemRenameUnicode.java:92) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at java.base/java.lang.Thread.run(Thread.java:834) Caused by: org.codehaus.jackson.map.exc.UnrecognizedPropertyException: Unrecognized field "creationTime" (Class org.apache.hadoop.fs.azurebfs.contracts.services.ListResultEntrySchema), not marked as i orable at [Source: 
[sun.net.www.protocol.http.HttpURLConnection$HttpInputStream@49e30796|mailto:sun.net.www.protocol.http.HttpURLConnection$HttpInputStream@49e30796];%20line:%201,%20column:%2048] (through reference chain: org.apache.hadoop.fs.azurebfs.contracts.services.ListResultSchema["pat "]->org.apache.hadoop.fs.azurebfs.contracts.services.ListResultEntrySchema["creationTime"]) at org.codehaus.jackson.map.exc.UnrecognizedPropertyException.from(UnrecognizedPropertyException.java:53) at org.codehaus.jackson.map.deser.StdDeserializationContext.unknownFieldException(StdDeserializationContext.java:267) at org.codehaus.jackson.map.deser.std.StdDeserializer.reportUnknownProperty(StdDeserializer.java:673) at org.codehaus.jackson.map.deser.std.StdDeserializer.handleUnknownProperty(StdDeserializer.java:659) at org.codehaus.jackson.map.deser.BeanDeserializer.handleUnknownProperty(BeanDeserializer.java:1365) at org.codehaus.jackson.map.deser.BeanDeserializer._handleUnknown(BeanDeserializer.java:725) at org.codehaus.jackson.map.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:703) at
[jira] [Created] (HADOOP-17086) Parsing errors in ABFS Driver with creation Time (being returned in ListPath)
Ishani created HADOOP-17086: --- Summary: Parsing errors in ABFS Driver with creation Time (being returned in ListPath) Key: HADOOP-17086 URL: https://issues.apache.org/jira/browse/HADOOP-17086 Project: Hadoop Common Issue Type: Sub-task Reporter: Ishani I am seeing errors while running ABFS Driver against stg75 build in canary. This is related to parsing errors as we receive creationTIme in the ListPath API. Here are the errors: mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify -Dit.test=ITestAzureBlobFileSystemRenameUnicode [ERROR] testRenameFileUsingUnicode[0](org.apache.hadoop.fs.azurebfs.ITestAzureBlobFileSystemRenameUnicode) Time elapsed: 852.083 s <<< ERROR! Status code: -1 error code: null error message: InvalidAbfsRestOperationExceptionorg.codehaus.jackson.map.exc.UnrecognizedPropertyException: Unrecognized field "creationTime" (Class org.apache.hadoop. .azurebfs.contracts.services.ListResultEntrySchema), not marked as ignorable at [Source: [sun.net.www.protocol.http.HttpURLConnection$HttpInputStream@49e30796|mailto:sun.net.www.protocol.http.HttpURLConnection$HttpInputStream@49e30796]; line: 1, column: 48] (through reference chain: org.apache.hadoop.fs.azurebfs.contracts.services.ListResultSchema["pat "]->org.apache.hadoop.fs.azurebfs.contracts.services.ListResultEntrySchema["creationTime"]) at org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.executeHttpOperation(AbfsRestOperation.java:273) at org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.execute(AbfsRestOperation.java:188) at org.apache.hadoop.fs.azurebfs.services.AbfsClient.listPath(AbfsClient.java:237) at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.listStatus(AzureBlobFileSystemStore.java:773) at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.listStatus(AzureBlobFileSystemStore.java:735) at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.listStatus(AzureBlobFileSystem.java:373) at org.apache.hadoop.fs.azurebfs.ITestAzureBlobFileSystemRenameUnicode.testRenameFileUsingUnicode(ITestAzureBlobFileSystemRenameUnicode.java:92) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at java.base/java.lang.Thread.run(Thread.java:834) Caused by: org.codehaus.jackson.map.exc.UnrecognizedPropertyException: Unrecognized field "creationTime" (Class org.apache.hadoop.fs.azurebfs.contracts.services.ListResultEntrySchema), not marked as i orable at [Source: 
[sun.net.www.protocol.http.HttpURLConnection$HttpInputStream@49e30796|mailto:sun.net.www.protocol.http.HttpURLConnection$HttpInputStream@49e30796]; line: 1, column: 48] (through reference chain: org.apache.hadoop.fs.azurebfs.contracts.services.ListResultSchema["pat "]->org.apache.hadoop.fs.azurebfs.contracts.services.ListResultEntrySchema["creationTime"]) at org.codehaus.jackson.map.exc.UnrecognizedPropertyException.from(UnrecognizedPropertyException.java:53) at org.codehaus.jackson.map.deser.StdDeserializationContext.unknownFieldException(StdDeserializationContext.java:267) at org.codehaus.jackson.map.deser.std.StdDeserializer.reportUnknownProperty(StdDeserializer.java:673) at org.codehaus.jackson.map.deser.std.StdDeserializer.handleUnknownProperty(StdDeserializer.java:659) at org.codehaus.jackson.map.deser.BeanDeserializer.handleUnknownProperty(BeanDeserializer.java:1365) at org.codehaus.jackson.map.deser.BeanDeserializer._handleUnknown(BeanDeserializer.java:725) at
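The failure above is Jackson 1.x (`org.codehaus.jackson`, as shown in the stack trace) rejecting a JSON field that `ListResultEntrySchema` does not declare. A hedged sketch of the two usual remedies is below; whether the eventual fix declares a `creationTime` property on the schema or relaxes the deserializer is up to the actual patch, so the class and field names here are illustrative only.

```java
import org.codehaus.jackson.annotate.JsonIgnoreProperties;
import org.codehaus.jackson.annotate.JsonProperty;
import org.codehaus.jackson.map.DeserializationConfig;
import org.codehaus.jackson.map.ObjectMapper;

public class IgnoreUnknownFieldsSketch {

  // Remedy 1: let the schema class skip JSON fields it does not declare.
  @JsonIgnoreProperties(ignoreUnknown = true)
  static class EntrySchema { // stand-in for ListResultEntrySchema
    @JsonProperty("name")
    public String name;
  }

  public static void main(String[] args) throws Exception {
    // Remedy 2: configure the ObjectMapper to not fail on unknown properties.
    ObjectMapper mapper = new ObjectMapper();
    mapper.configure(DeserializationConfig.Feature.FAIL_ON_UNKNOWN_PROPERTIES, false);

    // "creationTime" is not declared on EntrySchema, but with either remedy it
    // no longer triggers an UnrecognizedPropertyException.
    String json = "{\"name\":\"a.txt\",\"creationTime\":\"Mon, 22 Jun 2020 00:00:00 GMT\"}";
    EntrySchema entry = mapper.readValue(json, EntrySchema.class);
    System.out.println(entry.name);
  }
}
```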
[GitHub] [hadoop] NickyYe commented on pull request #2080: HDFS-15417. RBF: Get the datanode report from cache for federation WebHDFS operations
NickyYe commented on pull request #2080: URL: https://github.com/apache/hadoop/pull/2080#issuecomment-648622048 > Thanks @NickyYe for the update. Yes we can separate the report cache consolidation as a follow-up. Could you create a JIRA for that? Thanks. Filed: https://issues.apache.org/jira/browse/HDFS-15432 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] sunchao commented on a change in pull request #2080: HDFS-15417. RBF: Get the datanode report from cache for federation WebHDFS operations
sunchao commented on a change in pull request #2080: URL: https://github.com/apache/hadoop/pull/2080#discussion_r444673117 ## File path: hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java ## @@ -361,6 +380,23 @@ public RouterRpcServer(Configuration configuration, Router router, this.nnProto = new RouterNamenodeProtocol(this); this.clientProto = new RouterClientProtocol(conf, this); this.routerProto = new RouterUserProtocol(this); + +long dnCacheExpire = conf.getTimeDuration( +DN_REPORT_CACHE_EXPIRE, +DN_REPORT_CACHE_EXPIRE_MS_DEFAULT, TimeUnit.MILLISECONDS); +this.dnCache = CacheBuilder.newBuilder() +.build(new DatanodeReportCacheLoader()); + +// Actively refresh the dn cache in a configured interval +Executors Review comment: I see. Makes sense. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #2093: HDFS-15416. DataStorage#addStorageLocations() should add more reasonable information verification.
hadoop-yetus commented on pull request #2093: URL: https://github.com/apache/hadoop/pull/2093#issuecomment-648620311 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 22s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 23m 25s | trunk passed | | +1 :green_heart: | compile | 1m 29s | trunk passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 | | +1 :green_heart: | compile | 1m 13s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | checkstyle | 0m 51s | trunk passed | | +1 :green_heart: | mvnsite | 1m 15s | trunk passed | | +1 :green_heart: | shadedclient | 18m 42s | branch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 37s | hadoop-hdfs in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | +1 :green_heart: | javadoc | 0m 45s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +0 :ok: | spotbugs | 3m 24s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 3m 21s | trunk passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 21s | the patch passed | | +1 :green_heart: | compile | 1m 27s | the patch passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 | | +1 :green_heart: | javac | 1m 27s | the patch passed | | +1 :green_heart: | compile | 1m 13s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | javac | 1m 13s | the patch passed | | +1 :green_heart: | checkstyle | 0m 44s | the patch passed | | +1 :green_heart: | mvnsite | 1m 17s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 15m 39s | patch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 33s | hadoop-hdfs in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. | | +1 :green_heart: | javadoc | 0m 42s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | findbugs | 3m 36s | the patch passed | ||| _ Other Tests _ | | -1 :x: | unit | 117m 12s | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 44s | The patch does not generate ASF License warnings. 
| | | | 199m 25s | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.TestReadStripedFileWithMissingBlocks | | | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier | | | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped | | | hadoop.hdfs.TestMultipleNNPortQOP | | | hadoop.hdfs.TestStripedFileAppend | | | hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor | | | hadoop.hdfs.server.balancer.TestBalancerRPCDelay | | | hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes | | | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy | | | hadoop.hdfs.TestRollingUpgrade | | | hadoop.hdfs.TestReconstructStripedFile | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-2093/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2093 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 3a55e90233dc 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 84110d850e2 | | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-2093/1/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt | | javadoc |
[GitHub] [hadoop] NickyYe commented on a change in pull request #2080: HDFS-15417. RBF: Get the datanode report from cache for federation WebHDFS operations
NickyYe commented on a change in pull request #2080: URL: https://github.com/apache/hadoop/pull/2080#discussion_r444668178 ## File path: hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java ## @@ -361,6 +380,23 @@ public RouterRpcServer(Configuration configuration, Router router, this.nnProto = new RouterNamenodeProtocol(this); this.clientProto = new RouterClientProtocol(conf, this); this.routerProto = new RouterUserProtocol(this); + +long dnCacheExpire = conf.getTimeDuration( +DN_REPORT_CACHE_EXPIRE, +DN_REPORT_CACHE_EXPIRE_MS_DEFAULT, TimeUnit.MILLISECONDS); +this.dnCache = CacheBuilder.newBuilder() +.build(new DatanodeReportCacheLoader()); + +// Actively refresh the dn cache in a configured interval +Executors Review comment: Yes. The point here is that with refreshAfterWrite you would only get the previous value on this call, and the result would only be refreshed in the background for the next retrieval. If we only have one request per hour, you would get the datanode report from an hour ago, unless you make the call synchronous, which is slow. Given that this is already a background thread and not that heavy at the configured interval, the current design is better. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
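To make the trade-off in this exchange concrete, the sketch below contrasts the two styles under discussion: Guava's `refreshAfterWrite`, which reloads a stale entry only when the next caller asks for it (that caller still sees the old value), versus an actively scheduled refresh that keeps every entry at most one interval old regardless of traffic. The names, key types, and interval are illustrative, not the exact `RouterRpcServer` fields.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

public class RefreshStyleSketch {
  public static void main(String[] args) {
    final long intervalMs = 10_000L; // stand-in for the configured dn report expiry
    CacheLoader<String, String> loader = new CacheLoader<String, String>() {
      @Override
      public String load(String key) {
        return key + "@" + System.currentTimeMillis(); // pretend this is expensive
      }
    };

    // Style 1 (lazy): an entry older than the interval is reloaded only when the
    // next caller requests it, and that caller is still served the stale value.
    LoadingCache<String, String> lazy = CacheBuilder.newBuilder()
        .refreshAfterWrite(intervalMs, TimeUnit.MILLISECONDS)
        .build(loader);
    System.out.println(lazy.getUnchecked("LIVE"));

    // Style 2 (active): a scheduled task refreshes every cached key each interval,
    // so even a request arriving an hour later sees a recent value.
    LoadingCache<String, String> active = CacheBuilder.newBuilder().build(loader);
    System.out.println(active.getUnchecked("LIVE"));
    ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    scheduler.scheduleWithFixedDelay(
        () -> active.asMap().keySet().forEach(active::refresh),
        intervalMs, intervalMs, TimeUnit.MILLISECONDS);
    // In a real server this scheduler would live for the process lifetime.
  }
}
```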
[GitHub] [hadoop] sunchao commented on a change in pull request #2080: HDFS-15417. RBF: Get the datanode report from cache for federation WebHDFS operations
sunchao commented on a change in pull request #2080: URL: https://github.com/apache/hadoop/pull/2080#discussion_r444664594 ## File path: hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java ## @@ -868,6 +904,50 @@ public HdfsLocatedFileStatus getLocatedFileInfo(String src, return clientProto.getDatanodeReport(type); } + /** + * Get the datanode report from cache. + * + * @param type Type of the datanode. + * @return List of datanodes. + * @throws IOException If it cannot get the report. + */ + public DatanodeInfo[] getCachedDatanodeReport(DatanodeReportType type) Review comment: nit: this can be package-private? ## File path: hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java ## @@ -1748,4 +1828,58 @@ public void refreshSuperUserGroupsConfiguration() throws IOException { public String[] getGroupsForUser(String user) throws IOException { return routerProto.getGroupsForUser(user); } -} \ No newline at end of file + + /** + * Deals with loading datanode report into the cache and refresh. + */ + private class DatanodeReportCacheLoader + extends CacheLoader { + +private ListeningExecutorService executorService; + +DatanodeReportCacheLoader() { + ThreadFactory threadFactory = new ThreadFactoryBuilder() Review comment: hmm can we just use: ```java executorService = MoreExecutors.listeningDecorator( Executors.newSingleThreadExecutor()); ``` ? ## File path: hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java ## @@ -361,6 +380,23 @@ public RouterRpcServer(Configuration configuration, Router router, this.nnProto = new RouterNamenodeProtocol(this); this.clientProto = new RouterClientProtocol(conf, this); this.routerProto = new RouterUserProtocol(this); + +long dnCacheExpire = conf.getTimeDuration( +DN_REPORT_CACHE_EXPIRE, +DN_REPORT_CACHE_EXPIRE_MS_DEFAULT, TimeUnit.MILLISECONDS); +this.dnCache = CacheBuilder.newBuilder() +.build(new DatanodeReportCacheLoader()); + +// Actively refresh the dn cache in a configured interval +Executors Review comment: Hmm, have you considered using ```java this.dnCache = CacheBuilder.newBuilder() .refreshAfterWrite(dnCacheExpire, TimeUnit.MILLISECONDS) .build(new DatanodeReportCacheLoader()); ``` This will also automatically refresh the caches. Also it only refreshes a key iff 1) it becomes stale, and 2) there is a request on it. So this will save some calls for those infrequent DN report types. ## File path: hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java ## @@ -1748,4 +1828,58 @@ public void refreshSuperUserGroupsConfiguration() throws IOException { public String[] getGroupsForUser(String user) throws IOException { return routerProto.getGroupsForUser(user); } -} \ No newline at end of file + + /** + * Deals with loading datanode report into the cache and refresh. + */ + private class DatanodeReportCacheLoader + extends CacheLoader { + +private ListeningExecutorService executorService; + +DatanodeReportCacheLoader() { + ThreadFactory threadFactory = new ThreadFactoryBuilder() + .setNameFormat("DatanodeReport-Cache-Reload") + .setDaemon(true) + .build(); + + // Only use 1 thread to refresh cache. + // With coreThreadCount == maxThreadCount we effectively + // create a fixed size thread pool. As allowCoreThreadTimeOut + // has been set, all threads will die after 60 seconds of non use. 
+ ThreadPoolExecutor parentExecutor = new ThreadPoolExecutor( + 1, + 1, + 60, + TimeUnit.SECONDS, + new LinkedBlockingQueue(), + threadFactory); + parentExecutor.allowCoreThreadTimeOut(true); + executorService = MoreExecutors.listeningDecorator(parentExecutor); +} + +@Override +public DatanodeInfo[] load(DatanodeReportType type) throws Exception { + return getCachedDatanodeReportImpl(type); +} + +/** + * Override the reload method to provide an asynchronous implementation, + * so that the query will not be slowed down by the cache refresh. It + * will return the old cache value and schedule a background refresh. + */ +@Override +public ListenableFuture reload( +final DatanodeReportType type, DatanodeInfo[] oldValue) +throws Exception { + ListenableFuture listenableFuture = Review comment: nit: variable `listenableFuture` is redundant - you can just return from `submit` call. ## File path: