[GitHub] [hadoop] hadoop-yetus commented on pull request #2112: HDFS-15448. When starting a DataNode, call BlockPoolManager#startAll() twice.

2020-07-07 Thread GitBox


hadoop-yetus commented on pull request #2112:
URL: https://github.com/apache/hadoop/pull/2112#issuecomment-655301457


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:-------|
   | +0 :ok: |  reexec  |  19m  8s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  19m 41s |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 14s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |   1m  9s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 47s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 26s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 59s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 34s |  hadoop-hdfs in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   3m 11s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m  9s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  5s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  8s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |   1m  8s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  1s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |   1m  1s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 42s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m  7s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  13m 39s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 32s |  hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   2m 53s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |  94m 40s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 41s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 183m 50s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2112/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2112 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux b4da33040110 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4f26454a7d1 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 
|
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2112/3/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2112/3/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2112/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2112/3/testReport/ |
   | Max. process+thread count | 4317 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 

[jira] [Commented] (HADOOP-16862) [JDK11] Support JavaDoc

2020-07-07 Thread Ishani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17153223#comment-17153223
 ] 

Ishani commented on HADOOP-16862:
-

[https://github.com/apache/hadoop/pull/2125]

> [JDK11] Support JavaDoc
> ---
>
> Key: HADOOP-16862
> URL: https://issues.apache.org/jira/browse/HADOOP-16862
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>
> This issue is to run {{mvn javadoc:javadoc}} successfully in Apache Hadoop 
> with Java 11.
> Now there are many errors.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ishaniahuja edited a comment on pull request #2125: HADOOP-16966. ABFS: change rest version to 2019-12-12

2020-07-07 Thread GitBox


ishaniahuja edited a comment on pull request #2125:
URL: https://github.com/apache/hadoop/pull/2125#issuecomment-655274280


   yetus has a -1 because of javadoc and test4tests (no new tests were added 
or existing ones modified). I created two production accounts - namespace and 
non-namespace - for testing, and the test results are in the PR.
   The javadoc failure is happening in trunk (causing the yetus -1). JIRA: 
https://issues.apache.org/jira/browse/HADOOP-16862



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org






[GitHub] [hadoop] ishaniahuja commented on pull request #2125: HADOOP-16966. ABFS: change rest version to 2019-12-12

2020-07-07 Thread GitBox


ishaniahuja commented on pull request #2125:
URL: https://github.com/apache/hadoop/pull/2125#issuecomment-655274280


   yetus has a -1 because of javadoc and test4tests (no new tests were added 
or existing ones modified). I created two production accounts - namespace and 
non-namespace - for testing, and the test results are in the PR.






[GitHub] [hadoop] bilaharith commented on pull request #2123: ABFS: Making AzureADAuthenticator.getToken() throw HttpException if a…

2020-07-07 Thread GitBox


bilaharith commented on pull request #2123:
URL: https://github.com/apache/hadoop/pull/2123#issuecomment-655273139


   Javadoc is failing in the trunk as well. Please find the JIRA 
[HADOOP-17085](https://issues.apache.org/jira/browse/HADOOP-17085)
   
   **Driver test results using accounts in Central India**
   mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
   
   **Account with HNS Support**
   [INFO] Tests run: 65, Failures: 0, Errors: 0, Skipped: 0
   [WARNING] Tests run: 436, Failures: 0, Errors: 0, Skipped: 74
   [WARNING] Tests run: 206, Failures: 0, Errors: 0, Skipped: 24
   
   **Account without HNS support**
   [INFO] Tests run: 65, Failures: 0, Errors: 0, Skipped: 0
   [WARNING] Tests run: 436, Failures: 0, Errors: 0, Skipped: 248
   [WARNING] Tests run: 206, Failures: 0, Errors: 0, Skipped: 24






[jira] [Resolved] (HADOOP-15761) intermittent failure of TestAbfsClient.validateUserAgent

2020-07-07 Thread Bilahari T H (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilahari T H resolved HADOOP-15761.
---
Resolution: Won't Fix

Please see [HADOOP-16922|https://issues.apache.org/jira/browse/HADOOP-16922]

> intermittent failure of TestAbfsClient.validateUserAgent
> 
>
> Key: HADOOP-15761
> URL: https://issues.apache.org/jira/browse/HADOOP-15761
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: HADOOP-15407
> Environment: test suites run from IntelliJ IDEA
>Reporter: Steve Loughran
>Assignee: Bilahari T H
>Priority: Minor
>
> (seemingly intermittent) failure of the pattern matcher in 
> {{TestAbfsClient.validateUserAgent}}
> {code}
> java.lang.AssertionError: User agent Azure Blob FS/1.0 (JavaJRE 1.8.0_121; 
> MacOSX 10.13.6; openssl-1.0) Partner Service does not match regexp Azure Blob 
> FS\/1.0 \(JavaJRE ([^\)]+) SunJSSE-1.8\) Partner Service
> {code}
> Using a regexp is probably too brittle here: safest just to look for some 
> specific substring.
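The substring approach suggested above can be sketched as follows (illustrative class and method names, not the actual TestAbfsClient code):

```java
// Hypothetical sketch: validate the user agent by checking for fixed
// substrings instead of matching the whole format with a regex, which
// breaks when the JRE, OS, or SSL-provider portion varies.
public class UserAgentCheck {
    static boolean looksLikeAbfsUserAgent(String userAgent) {
        return userAgent.contains("Azure Blob FS/1.0")
            && userAgent.contains("Partner Service");
    }

    public static void main(String[] args) {
        String ua = "Azure Blob FS/1.0 (JavaJRE 1.8.0_121; MacOSX 10.13.6; "
            + "openssl-1.0) Partner Service";
        System.out.println(looksLikeAbfsUserAgent(ua)); // true
    }
}
```

The check stays stable across environments because it asserts only the parts of the string the test actually cares about.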






[GitHub] [hadoop] hadoop-yetus commented on pull request #2126: YARN-10344. Sync netty versions in hadoop-yarn-csi.

2020-07-07 Thread GitBox


hadoop-yetus commented on pull request #2126:
URL: https://github.com/apache/hadoop/pull/2126#issuecomment-655265616


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:-------|
   | +0 :ok: |  reexec  |  21m 44s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  20m 32s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |   0m 29s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  35m 50s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 29s |  hadoop-yarn-csi in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 26s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |   0m 26s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 22s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |   0m 22s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 27s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  13m 33s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 23s |  hadoop-yarn-csi in the patch failed with 
JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 33s |  hadoop-yarn-csi in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 28s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  78m 16s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2126/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2126 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml |
   | uname | Linux 34d081a65189 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4f26454a7d1 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 
|
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2126/1/artifact/out/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-csi-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2126/1/artifact/out/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-csi-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2126/1/testReport/ |
   | Max. process+thread count | 414 (vs. ulimit of 5500) |
   | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2126/1/console |
   | versions | git=2.17.1 maven=3.6.0 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[jira] [Commented] (HADOOP-16798) job commit failure in S3A MR magic committer test

2020-07-07 Thread Aaron Fabbri (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17153199#comment-17153199
 ] 

Aaron Fabbri commented on HADOOP-16798:
---

I missed the party on this one, but just had a thought: did you consider 
inserting a failure point that hangs one of the commit threads when they POST 
data? Either delay the POST or the response? Would that make it easier to 
reproduce these cases?

Thanks for the fix.

> job commit failure in S3A MR magic committer test
> -
>
> Key: HADOOP-16798
> URL: https://issues.apache.org/jira/browse/HADOOP-16798
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.3.1
>
> Attachments: stdout
>
>
> failure in 
> {code}
> ITestS3ACommitterMRJob.test_200_execute:304->Assert.fail:88 Job 
> job_1578669113137_0003 failed in state FAILED with cause Job commit failed: 
> java.util.concurrent.RejectedExecutionException: Task 
> java.util.concurrent.FutureTask@6e894de2 rejected from 
> org.apache.hadoop.util.concurrent.HadoopThreadPoolExecutor@225eed53[Terminated,
>  pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0]
> {code}
> Stack implies thread pool rejected it, but toString says "Terminated". Race 
> condition?
> *update 2020-04-22*: it's caused when a task is aborted in the AM - the 
> threadpool is disposed of, and while that is shutting down in one thread, 
> task commit is initiated using the same thread pool. When the task 
> committer's destroy operation times out, it kills all the active uploads.
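The described race reduces to submitting work to an executor that another code path has already shut down. A minimal, JDK-only reproduction of that failure mode (not Hadoop code):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;

public class ShutdownRace {
    // Mimics the race above: the pool is disposed of (task abort in the AM)
    // and task commit then submits to the same, now-shutting-down pool.
    static boolean submitAfterShutdown() {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        pool.shutdownNow(); // pool begins terminating
        try {
            pool.submit(() -> { });
            return false;
        } catch (RejectedExecutionException e) {
            // Same exception type as in the stack trace quoted above.
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println("rejected: " + submitAfterShutdown()); // rejected: true
    }
}
```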






[GitHub] [hadoop] aajisaka opened a new pull request #2126: YARN-10344. Sync netty versions in hadoop-yarn-csi.

2020-07-07 Thread GitBox


aajisaka opened a new pull request #2126:
URL: https://github.com/apache/hadoop/pull/2126


   JIRA: https://issues.apache.org/jira/browse/YARN-10344
   
   Before:
   ```
   [INFO] --- maven-dependency-plugin:3.0.2:tree (default-cli) @ 
hadoop-yarn-csi ---
   [INFO] org.apache.hadoop:hadoop-yarn-csi:jar:3.3.0
   [INFO] +- com.google.guava:guava:jar:20.0:compile
   [INFO] +- com.google.protobuf:protobuf-java:jar:3.6.1:compile
   [INFO] +- io.netty:netty-all:jar:4.1.50.Final:compile
   [INFO] +- io.grpc:grpc-core:jar:1.26.0:compile
   [INFO] |  +- io.grpc:grpc-api:jar:1.26.0:compile (version selected from 
constraint [1.26.0,1.26.0])
   [INFO] |  |  +- io.grpc:grpc-context:jar:1.26.0:compile
   [INFO] |  |  +- 
com.google.errorprone:error_prone_annotations:jar:2.3.3:compile
   [INFO] |  |  \- org.codehaus.mojo:animal-sniffer-annotations:jar:1.17:compile
   [INFO] |  +- com.google.code.gson:gson:jar:2.2.4:compile
   [INFO] |  +- com.google.android:annotations:jar:4.1.1.4:compile
   [INFO] |  +- io.perfmark:perfmark-api:jar:0.19.0:compile
   [INFO] |  +- io.opencensus:opencensus-api:jar:0.24.0:compile
   [INFO] |  \- io.opencensus:opencensus-contrib-grpc-metrics:jar:0.24.0:compile
   [INFO] +- io.grpc:grpc-protobuf:jar:1.26.0:compile
   [INFO] |  +- 
com.google.api.grpc:proto-google-common-protos:jar:1.12.0:compile
   [INFO] |  \- io.grpc:grpc-protobuf-lite:jar:1.26.0:compile
   [INFO] +- io.grpc:grpc-stub:jar:1.26.0:compile
   [INFO] +- io.grpc:grpc-netty:jar:1.26.0:compile
   [INFO] |  +- io.netty:netty-codec-http2:jar:4.1.42.Final:compile (version 
selected from constraint [4.1.42.Final,4.1.42.Final])
   [INFO] |  |  +- io.netty:netty-common:jar:4.1.42.Final:compile
   [INFO] |  |  +- io.netty:netty-buffer:jar:4.1.42.Final:compile
   [INFO] |  |  +- io.netty:netty-transport:jar:4.1.42.Final:compile
   [INFO] |  |  |  \- io.netty:netty-resolver:jar:4.1.42.Final:compile
   [INFO] |  |  +- io.netty:netty-codec:jar:4.1.42.Final:compile
   [INFO] |  |  +- io.netty:netty-handler:jar:4.1.42.Final:compile
   [INFO] |  |  \- io.netty:netty-codec-http:jar:4.1.42.Final:compile
   [INFO] |  \- io.netty:netty-handler-proxy:jar:4.1.42.Final:compile
   [INFO] | \- io.netty:netty-codec-socks:jar:4.1.42.Final:compile
   (snip)
   ```
   
   After:
   ```
   [INFO] --- maven-dependency-plugin:3.0.2:tree (default-cli) @ 
hadoop-yarn-csi ---
   [INFO] org.apache.hadoop:hadoop-yarn-csi:jar:3.4.0-SNAPSHOT
   [INFO] +- com.google.guava:guava:jar:20.0:compile
   [INFO] +- com.google.protobuf:protobuf-java:jar:3.6.1:compile
   [INFO] +- io.netty:netty-all:jar:4.1.50.Final:compile
   [INFO] +- io.grpc:grpc-core:jar:1.26.0:compile
   [INFO] |  +- io.grpc:grpc-api:jar:1.26.0:compile (version selected from 
constraint [1.26.0,1.26.0])
   [INFO] |  |  +- io.grpc:grpc-context:jar:1.26.0:compile
   [INFO] |  |  +- 
com.google.errorprone:error_prone_annotations:jar:2.3.3:compile
   [INFO] |  |  \- org.codehaus.mojo:animal-sniffer-annotations:jar:1.17:compile
   [INFO] |  +- com.google.code.gson:gson:jar:2.2.4:compile
   [INFO] |  +- com.google.android:annotations:jar:4.1.1.4:compile
   [INFO] |  +- io.perfmark:perfmark-api:jar:0.19.0:compile
   [INFO] |  +- io.opencensus:opencensus-api:jar:0.24.0:compile
   [INFO] |  \- io.opencensus:opencensus-contrib-grpc-metrics:jar:0.24.0:compile
   [INFO] +- io.grpc:grpc-protobuf:jar:1.26.0:compile
   [INFO] |  +- 
com.google.api.grpc:proto-google-common-protos:jar:1.12.0:compile
   [INFO] |  \- io.grpc:grpc-protobuf-lite:jar:1.26.0:compile
   [INFO] +- io.grpc:grpc-stub:jar:1.26.0:compile
   [INFO] +- io.grpc:grpc-netty:jar:1.26.0:compile
   (snip)
   ```
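One common way to align transitive netty versions like those above is a dependencyManagement entry in the module pom. A hedged sketch only - the actual YARN-10344 patch may pin the versions differently, e.g. via per-artifact version properties:

```xml
<dependencyManagement>
  <dependencies>
    <!-- Import the netty BOM so every io.netty:* artifact pulled in
         transitively (e.g. by grpc-netty) resolves to a single version,
         matching the netty-all version already on the classpath. -->
    <dependency>
      <groupId>io.netty</groupId>
      <artifactId>netty-bom</artifactId>
      <version>4.1.50.Final</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
```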






[jira] [Updated] (HADOOP-17116) Skip Retry INFO logging on first failover from a proxy

2020-07-07 Thread Hanisha Koneru (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HADOOP-17116:

Attachment: HADOOP-17116.001.patch

> Skip Retry INFO logging on first failover from a proxy
> --
>
> Key: HADOOP-17116
> URL: https://issues.apache.org/jira/browse/HADOOP-17116
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HADOOP-17116.001.patch
>
>
> RetryInvocationHandler logs an INFO level message on every failover except 
> the first. This used to be ideal before when there were only 2 proxies in the 
> FailoverProxyProvider. But if there are more than 2 proxies (as is possible 
> with 3 or more NNs in HA), then there could be more than one failover to find 
> the currently active proxy.
> To avoid creating noise in clients logs/ console, RetryInvocationHandler 
> should skip logging once for each proxy.
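The proposed behavior can be sketched as a skip-once-per-proxy filter (illustrative names, not the actual RetryInvocationHandler patch):

```java
import java.util.HashSet;
import java.util.Set;

public class FailoverLogFilter {
    private final Set<String> seenProxies = new HashSet<>();

    // Returns true when the failover to this proxy should be logged.
    // Set.add() returns false for a proxy seen before, i.e. only repeat
    // failovers to the same proxy produce an INFO message.
    boolean shouldLogFailover(String proxyId) {
        return !seenProxies.add(proxyId);
    }

    public static void main(String[] args) {
        FailoverLogFilter f = new FailoverLogFilter();
        System.out.println(f.shouldLogFailover("nn1")); // false: first try, quiet
        System.out.println(f.shouldLogFailover("nn2")); // false: first try, quiet
        System.out.println(f.shouldLogFailover("nn1")); // true: repeat, log it
    }
}
```

With 3 or more NameNodes in HA, the initial scan for the active proxy then produces no INFO noise at all.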






[GitHub] [hadoop] fengnanli commented on a change in pull request #2110: HDFS-15447 RBF: Add top real owners metrics for delegation tokens

2020-07-07 Thread GitBox


fengnanli commented on a change in pull request #2110:
URL: https://github.com/apache/hadoop/pull/2110#discussion_r451227559



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java
##
@@ -726,4 +741,86 @@ public TokenIdent decodeTokenIdentifier(Token<TokenIdent> 
token) throws IOException
 return token.decodeIdentifier();
   }
 
+  /**
+   * Return top token real owners list as well as the tokens count.
+   *
+   * @param n top number of users
+   * @return map of owners to counts
+   */
+  public List<NameValuePair> getTopTokenRealOwners(int n) {
+n = Math.min(n, tokenOwnerStats.size());
+if (n == 0) {
+  return new ArrayList<>();
+}
+
+TopN topN = new TopN(n);
+for (Map.Entry<String, Long> entry : tokenOwnerStats.entrySet()) {
+  topN.offer(new NameValuePair(
+  entry.getKey(), entry.getValue()));
+}
+
+List<NameValuePair> list = new ArrayList<>();
+while (!topN.isEmpty()) {
+  list.add(topN.poll());
+}
+Collections.reverse(list);
+return list;
+  }
+
+  /**
+   * Return the real owner for a token. If this is a token from a proxy user,
+   * the real/effective user will be returned.
+   *
+   * @param id
+   * @return real owner
+   */
+  public String getTokenRealOwner(TokenIdent id) {
+String realUser;
+if (id.getRealUser() != null && !id.getRealUser().toString().isEmpty()) {
+  realUser = id.getRealUser().toString();
+} else {
+  // if there is no real user -> this is a non proxy user
+  // the user itself is the real owner
+  realUser = id.getUser().getUserName();
+}
+return realUser;
+  }
+
+  /**
+   * Add token stats to the owner to token count mapping.
+   *
+   * @param id
+   */
+  public void addTokenForOwnerStats(TokenIdent id) {
+String realOwner = getTokenRealOwner(id);
+tokenOwnerStats.put(realOwner,
+tokenOwnerStats.getOrDefault(realOwner, 0l)+1);
+  }
+
+  /**
+   * Remove token stats to the owner to token count mapping.
+   *
+   * @param id
+   */
+  public void removeTokenForOwnerStats(TokenIdent id) {
+String realOwner = getTokenRealOwner(id);
+if (tokenOwnerStats.containsKey(realOwner)) {
+  // unlikely to be less than 1 but in case
+  if (tokenOwnerStats.get(realOwner) <= 1) {

Review comment:
   The function is called from `createPassword` and `cancelToken`, which are 
both synchronized, so it is safe here. `currentTokens` is used in the same 
pattern.
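For context, if the callers were not already synchronized, the getOrDefault-then-put increment would not be atomic on its own. A hypothetical lock-free variant of the same bookkeeping using ConcurrentHashMap's atomic operations (class and method names are illustrative):

```java
import java.util.concurrent.ConcurrentHashMap;

public class OwnerStats {
    private final ConcurrentHashMap<String, Long> tokenOwnerStats =
        new ConcurrentHashMap<>();

    void addTokenForOwner(String owner) {
        tokenOwnerStats.merge(owner, 1L, Long::sum); // atomic increment
    }

    void removeTokenForOwner(String owner) {
        // Atomic decrement; returning null drops the entry at zero, so
        // owners with no live tokens never linger in the map.
        tokenOwnerStats.computeIfPresent(owner,
            (k, v) -> v <= 1 ? null : v - 1);
    }

    long count(String owner) {
        return tokenOwnerStats.getOrDefault(owner, 0L);
    }

    public static void main(String[] args) {
        OwnerStats s = new OwnerStats();
        s.addTokenForOwner("alice");
        s.addTokenForOwner("alice");
        s.removeTokenForOwner("alice");
        System.out.println(s.count("alice")); // 1
    }
}
```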








[GitHub] [hadoop] fengnanli commented on a change in pull request #2110: HDFS-15447 RBF: Add top real owners metrics for delegation tokens

2020-07-07 Thread GitBox


fengnanli commented on a change in pull request #2110:
URL: https://github.com/apache/hadoop/pull/2110#discussion_r451227291



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java
##
@@ -578,6 +591,7 @@ public synchronized TokenIdent 
cancelToken(Token<TokenIdent> token,
 if (info == null) {
   throw new InvalidToken("Token not found " + formatTokenId(id));
 }
+removeTokenForOwnerStats(id);

Review comment:
   The reason I left the order like this is that the metric is an in-memory 
reflection of `currentTokens`, so it can be updated as soon as the in-memory 
data structure changes.
   The procedure after it is for persistent storage.

##
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml
##
@@ -657,4 +657,12 @@
 
   
 
+  
+dfs.federation.router.top.num.token.realowners
+10
+
+  The number of top real owners by tokens count to report in the JMX 
metrics.

Review comment:
   fixed.








[GitHub] [hadoop] fengnanli commented on a change in pull request #2110: HDFS-15447 RBF: Add top real owners metrics for delegation tokens

2020-07-07 Thread GitBox


fengnanli commented on a change in pull request #2110:
URL: https://github.com/apache/hadoop/pull/2110#discussion_r451227069



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java
##
@@ -726,4 +741,86 @@ public TokenIdent decodeTokenIdentifier(Token<TokenIdent> 
token) throws IOException
 return token.decodeIdentifier();
   }
 
+  /**
+   * Return top token real owners list as well as the tokens count.
+   *
+   * @param n top number of users
+   * @return map of owners to counts
+   */
+  public List<NameValuePair> getTopTokenRealOwners(int n) {
+n = Math.min(n, tokenOwnerStats.size());
+if (n == 0) {
+  return new ArrayList<>();
+}
+
+TopN topN = new TopN(n);
+for (Map.Entry<String, Long> entry : tokenOwnerStats.entrySet()) {
+  topN.offer(new NameValuePair(
+  entry.getKey(), entry.getValue()));
+}
+
+List<NameValuePair> list = new ArrayList<>();
+while (!topN.isEmpty()) {
+  list.add(topN.poll());
+}
+Collections.reverse(list);
+return list;
+  }
+
+  /**
+   * Return the real owner for a token. If this is a token from a proxy user,
+   * the real/effective user will be returned.
+   *
+   * @param id
+   * @return real owner
+   */
+  public String getTokenRealOwner(TokenIdent id) {
+String realUser;
+if (id.getRealUser() != null && !id.getRealUser().toString().isEmpty()) {
+  realUser = id.getRealUser().toString();
+} else {
+  // if there is no real user -> this is a non proxy user
+  // the user itself is the real owner
+  realUser = id.getUser().getUserName();
+}
+return realUser;
+  }
+
+  /**
+   * Add token stats to the owner to token count mapping.
+   *
+   * @param id
+   */
+  public void addTokenForOwnerStats(TokenIdent id) {
+String realOwner = getTokenRealOwner(id);
+tokenOwnerStats.put(realOwner,
+tokenOwnerStats.getOrDefault(realOwner, 0l)+1);
+  }
+
+  /**
+   * Remove token stats to the owner to token count mapping.
+   *
+   * @param id
+   */
+  public void removeTokenForOwnerStats(TokenIdent id) {

Review comment:
   this was fixed.








[GitHub] [hadoop] fengnanli commented on a change in pull request #2110: HDFS-15447 RBF: Add top real owners metrics for delegation tokens

2020-07-07 Thread GitBox


fengnanli commented on a change in pull request #2110:
URL: https://github.com/apache/hadoop/pull/2110#discussion_r451227025



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java
##
@@ -64,7 +69,13 @@ private String formatTokenId(TokenIdent id) {
*/
   protected final Map<TokenIdent, DelegationTokenInformation> currentTokens 
   = new ConcurrentHashMap<>();
-  
+
+  /**
+   * Map of token real owners to its token count. This is used to generate
+   * top users by owned tokens.
+   */
+  protected final Map<String, Long> tokenOwnerStats = new 
ConcurrentHashMap<>();

Review comment:
   It is up to each individual secret manager to initialize `currentTokens`. 
In the NameNode it is loaded from the edit log; in the Router it is loaded 
from ZooKeeper.
   I will file a separate ticket for the NameNode.








[GitHub] [hadoop] fengnanli commented on pull request #2110: HDFS-15447 RBF: Add top real owners metrics for delegation tokens

2020-07-07 Thread GitBox


fengnanli commented on pull request #2110:
URL: https://github.com/apache/hadoop/pull/2110#issuecomment-655221130


   GitHub screwed up a lot of my comments and I will reply again.






[GitHub] [hadoop] fengnanli commented on a change in pull request #2110: HDFS-15447 RBF: Add top real owners metrics for delegation tokens

2020-07-07 Thread GitBox


fengnanli commented on a change in pull request #2110:
URL: https://github.com/apache/hadoop/pull/2110#discussion_r451226040



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java
##
@@ -726,4 +741,86 @@ public TokenIdent decodeTokenIdentifier(Token<TokenIdent> 
token) throws IOException
 return token.decodeIdentifier();
   }
 
+  /**
+   * Return top token real owners list as well as the tokens count.
+   *
+   * @param n top number of users
+   * @return map of owners to counts
+   */
+  public List getTopTokenRealOwners(int n) {

Review comment:
   I will keep it as it is.
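For context, extracting the top-n owners from such an owner-to-count map can be done with a stream sort. A sketch follows; the `"owner=count"` string result and the class name `TopOwners` are illustrative assumptions, as the actual patch's return type may differ:

```java
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative top-n extraction over an owner -> token-count map.
public class TopOwners {
    // Returns the n owners with the most tokens, largest first,
    // formatted as "owner=count" strings (an assumed representation).
    public static List<String> topTokenRealOwners(Map<String, Long> stats, int n) {
        return stats.entrySet().stream()
            .sorted(Map.Entry.<String, Long>comparingByValue(Comparator.reverseOrder()))
            .limit(n)
            .map(e -> e.getKey() + "=" + e.getValue())
            .collect(Collectors.toList());
    }
}
```

A full sort is O(m log m) in the number of owners; for very large owner sets a bounded priority queue of size n would be cheaper, but the stream form is the simplest to read.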








[jira] [Created] (HADOOP-17116) Skip Retry INFO logging on first failover from a proxy

2020-07-07 Thread Hanisha Koneru (Jira)
Hanisha Koneru created HADOOP-17116:
---

 Summary: Skip Retry INFO logging on first failover from a proxy
 Key: HADOOP-17116
 URL: https://issues.apache.org/jira/browse/HADOOP-17116
 Project: Hadoop Common
  Issue Type: Task
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru


RetryInvocationHandler logs an INFO-level message on every failover except the 
first. This was ideal when there were only 2 proxies in the 
FailoverProxyProvider. But if there are more than 2 proxies (as is possible 
with 3 or more NNs in HA), there can be more than one failover before the 
currently active proxy is found.

To avoid creating noise in client logs/consoles, RetryInvocationHandler should 
skip logging once for each proxy.
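One way to read "skip logging once for each proxy" is to suppress the INFO message until every configured proxy has had a chance to be tried. The sketch below is a hypothetical policy object, not the actual RetryInvocationHandler change:

```java
// Hypothetical policy: suppress INFO logging for the first
// (proxyCount - 1) failovers, which may simply be the client
// searching for the currently active proxy among N NameNodes.
public class FailoverLogPolicy {
    private final int proxyCount;
    private int failovers;

    public FailoverLogPolicy(int proxyCount) {
        this.proxyCount = proxyCount;
    }

    // Returns true when a failover should be logged at INFO level.
    public boolean shouldLogAtInfo() {
        failovers++;
        return failovers >= proxyCount;
    }
}
```

With 3 NNs, the first two failovers stay quiet; from the third onward, repeated failovers indicate a real problem and are logged.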



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17099) Replace Guava Predicate with Java8+ Predicate

2020-07-07 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17153132#comment-17153132
 ] 

Hadoop QA commented on HADOOP-17099:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
26m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
40s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m  
8s{color} | {color:blue} Used deprecated FindBugs config; considering switching 
to SpotBugs. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
27s{color} | {color:blue} branch/hadoop-build-tools no findbugs output file 
(findbugsXml.xml) {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 20m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 9s{color} | {color:green} root: The patch generated 0 new + 101 unchanged - 5 
fixed = 101 total (was 106) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 16s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
40s{color} | {color:green} the patch passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
25s{color} | {color:blue} hadoop-build-tools has no data from findbugs {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
28s{color} | {color:green} hadoop-build-tools in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m 35s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}129m 53s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m  
9s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 22m 
10s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 92m  8s{color} 
| {color:red} 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2124: HADOOP-17101. replace Guava Function

2020-07-07 Thread GitBox


hadoop-yetus commented on pull request #2124:
URL: https://github.com/apache/hadoop/pull/2124#issuecomment-655145549


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 36s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
2 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  26m 20s |  trunk passed  |
   | +1 :green_heart: |  compile  |  25m 55s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |  21m 39s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   3m 30s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   5m 33s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  27m 42s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 44s |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 47s |  hadoop-hdfs in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 37s |  hadoop-yarn-common in trunk failed with 
JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   3m 49s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   1m 46s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +0 :ok: |  findbugs  |   0m 28s |  branch/hadoop-build-tools no findbugs 
output file (findbugsXml.xml)  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m 45s |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 51s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |  21m 51s |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m  9s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |  19m  9s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   3m 48s |  root: The patch generated 1 new 
+ 67 unchanged - 3 fixed = 68 total (was 70)  |
   | +1 :green_heart: |  mvnsite  |   5m 57s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  17m 34s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 44s |  hadoop-common in the patch failed with 
JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 45s |  hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 38s |  hadoop-yarn-common in the patch failed 
with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   4m 36s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  findbugs  |   0m 32s |  hadoop-build-tools has no data from 
findbugs  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 35s |  hadoop-build-tools in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  11m 40s |  hadoop-common in the patch passed. 
 |
   | -1 :x: |  unit  | 130m 52s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  unit  |   4m 44s |  hadoop-yarn-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   8m 33s |  hadoop-mapreduce-client-core in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  7s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 378m 39s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   |   | hadoop.hdfs.server.balancer.TestBalancer |
   |   | hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
   |   | hadoop.hdfs.TestSafeModeWithStripedFileWithRandomECPolicy |
   |   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2124/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2124 |
   | Optional Tests | dupname 

[jira] [Commented] (HADOOP-17101) Replace Guava Function with Java8+ Function

2020-07-07 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17153093#comment-17153093
 ] 

Hadoop QA commented on HADOOP-17101:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 21m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
22m 12s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
19s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
25s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
38s{color} | {color:blue} branch/hadoop-build-tools no findbugs output file 
(findbugsXml.xml) {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
12s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 37s{color} | {color:orange} root: The patch generated 1 new + 67 unchanged - 
3 fixed = 68 total (was 70) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
17s{color} | {color:green} the patch passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
38s{color} | {color:blue} hadoop-build-tools has no data from findbugs {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
37s{color} | {color:green} hadoop-build-tools in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
27s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 93m 48s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
25s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m  
8s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 4s{color} | 

[jira] [Commented] (HADOOP-16830) Add public IOStatistics API; S3A to support

2020-07-07 Thread Luca Canali (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17153056#comment-17153056
 ] 

Luca Canali commented on HADOOP-16830:
--

Thanks, that looks quite useful and promising. I'll test it and hopefully 
provide some more meaningful feedback (although it will take another couple of 
weeks for me to do that).

> Add public IOStatistics API; S3A to support
> ---
>
> Key: HADOOP-16830
> URL: https://issues.apache.org/jira/browse/HADOOP-16830
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> Applications like to collect statistics on the specific operations they 
> perform, by collecting exactly those operations done during the execution of 
> FS API calls by their individual worker threads, and returning these to their 
> job driver
> * S3A has a statistics API for some streams, but it's a non-standard one; 
> Impala can't use it
> * FileSystem storage statistics are public, but as they aren't cross-thread, 
> they don't aggregate properly
> Proposed
> # A new IOStatistics interface to serve up statistics
> # S3A to implement
> # other stores to follow
> # Pass-through from the usual wrapper classes (FS data input/output streams)
> It's hard to think about how best to offer an API for operation-context 
> stats, and how to actually implement it.
> ThreadLocal isn't enough because the helper threads need to update the 
> thread-local value of the instigator.
> My initial PoC doesn't address that issue, but it shows what I'm thinking of
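The cross-thread aggregation problem described above (ThreadLocal counters don't work because helper threads must contribute to the instigator's statistics) can be sketched with a shared counter map. This is illustrative only and not the actual IOStatistics API; all names here are assumptions:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Workers update a shared, low-contention counter map instead of a
// per-thread ThreadLocal, so aggregated totals are visible to the
// job driver regardless of which thread did the I/O.
public class SharedIOStatistics {
    private final Map<String, LongAdder> counters = new ConcurrentHashMap<>();

    public void increment(String key, long delta) {
        counters.computeIfAbsent(key, k -> new LongAdder()).add(delta);
    }

    public long lookup(String key) {
        LongAdder a = counters.get(key);
        return a == null ? 0L : a.sum();
    }
}
```

`LongAdder` is chosen over `AtomicLong` because statistics are write-heavy and read-rarely, exactly the workload it is designed for.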






[GitHub] [hadoop] hadoop-yetus commented on pull request #2123: ABFS: Making AzureADAuthenticator.getToken() throw HttpException if a…

2020-07-07 Thread GitBox


hadoop-yetus commented on pull request #2123:
URL: https://github.com/apache/hadoop/pull/2123#issuecomment-655071803


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 12s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  21m 44s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |   0m 28s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 20s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 24s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 26s |  hadoop-azure in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   0m 54s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 51s |  trunk passed  |
   | -0 :warning: |  patch  |   1m  9s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 27s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |   0m 28s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 22s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |   0m 22s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 15s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 26s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 31s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 23s |  hadoop-azure in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   0m 55s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 19s |  hadoop-azure in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 27s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  65m 50s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2123 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux cfd8da72cceb 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4f26454a7d1 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 
|
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/3/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/3/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/3/testReport/ |
   | Max. process+thread count | 307 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/3/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 |
   | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2125: HADOOP-16966. ABFS: change rest version to 2019-12-12

2020-07-07 Thread GitBox


hadoop-yetus commented on pull request #2125:
URL: https://github.com/apache/hadoop/pull/2125#issuecomment-655059920


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  30m 33s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  26m 11s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |   0m 35s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 24s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 37s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m  0s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 30s |  hadoop-azure in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   1m  6s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m  3s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 37s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |   0m 37s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |   0m 29s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 20s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  16m 52s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 25s |  hadoop-azure in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   1m  4s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 35s |  hadoop-azure in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 31s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 104m 27s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2125/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2125 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 4dbd1e02c81a 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4f26454a7d1 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 
|
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2125/1/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2125/1/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2125/1/testReport/ |
   | Max. process+thread count | 308 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2125/1/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[jira] [Commented] (HADOOP-16492) Support HuaweiCloud Object Storage as a Hadoop Backend File System

2020-07-07 Thread Brahma Reddy Battula (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17152970#comment-17152970
 ] 

Brahma Reddy Battula commented on HADOOP-16492:
---

[~zhongjun] thanks for uploading the design doc.

At first glance:
 * The document states the differences between s3a and obs, but I didn't see 
any feature difference or performance difference. Are you planning to update 
on that?
 * It looks more appropriate to move this under "hadoop-cloud-storage-project" 
instead of the tools?

 * All tests against the live store should have the prefix ITest, not Test, and 
be set up so that they run in the mvn verify stage. All that can run in 
parallel should do so, for performance. Look at hadoop-azure and hadoop-aws for 
examples here.
 * Please share account details, so that we can execute some of the test cases.

[~junping_du] could you look into this, as you recently worked on tencentcloud cos.

> Support HuaweiCloud Object Storage as a Hadoop Backend File System
> --
>
> Key: HADOOP-16492
> URL: https://issues.apache.org/jira/browse/HADOOP-16492
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 3.4.0
>Reporter: zhongjun
>Priority: Major
> Attachments: Difference Between OBSA and S3A.pdf, 
> HADOOP-16492.001.patch, HADOOP-16492.002.patch, HADOOP-16492.003.patch, 
> HADOOP-16492.004.patch, HADOOP-16492.005.patch, HADOOP-16492.006.patch, 
> HADOOP-16492.007.patch, HADOOP-16492.008.patch, HADOOP-16492.009.patch, 
> HADOOP-16492.010.patch, HADOOP-16492.011.patch, OBSA HuaweiCloud OBS Adapter 
> for Hadoop Support.pdf
>
>
> Added support for HuaweiCloud OBS 
> ([https://www.huaweicloud.com/en-us/product/obs.html]) to Hadoop file system, 
> just like what we do before for S3, ADLS, OSS, etc. With simple 
> configuration, Hadoop applications can read/write data from OBS without any 
> code change.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao commented on pull request #2085: HADOOP-17079. Optimize UGI#getGroups by adding UGI#getGroupsSet.

2020-07-07 Thread GitBox


xiaoyuyao commented on pull request #2085:
URL: https://github.com/apache/hadoop/pull/2085#issuecomment-655016953


   @jojochuang can you help take a look at this change? Thanks



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ishaniahuja opened a new pull request #2125: HADOOP-16966. ABFS: change rest version to 2019-12-12

2020-07-07 Thread GitBox


ishaniahuja opened a new pull request #2125:
URL: https://github.com/apache/hadoop/pull/2125


   The PR changes the RestVersion to 2019-12-12 when sending requests from the 
ABFS driver. The change was tested with namespace and non-namespace accounts in 
production. What's missing? Documentation for the appendblob directories config 
parameter, as the backend tenants are not enabled / the right release is not present.
   
   Here are the test results:
   
   namespace enabled
   Tests run: 85, Failures: 0, Errors: 0, Skipped: 0
   Tests run: 447, Failures: 0, Errors: 0, Skipped: 42
   Tests run: 207, Failures: 0, Errors: 0, Skipped: 24
   
   non namespace enabled:
   Tests run: 85, Failures: 0, Errors: 0, Skipped: 0
   Tests run: 447, Failures: 0, Errors: 0, Skipped: 245
   Tests run: 207, Failures: 0, Errors: 0, Skipped: 24



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16966) ABFS: Enable new Rest Version and add documentation for appendblob

2020-07-07 Thread Ishani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishani updated HADOOP-16966:

Description: 
When the new RestVersion(2019-12-12) is enabled in the backend, enable that in 
the driver along with the documentation for the appendblob.key config values 
which are possible with the new RestVersion.

 Configs:

fs.azure.appendblob.directories

 

  was:
When the new RestVersion(2019-02-10) is enabled in the backend, enable that in 
the driver along with the documentation for the appendblob.key config values 
which are possible with the new RestVersion.

 Configs:

fs.azure.enable.appendwithflush

fs.azure.appendblob.key

 


> ABFS: Enable new Rest Version and add documentation for appendblob
> --
>
> Key: HADOOP-16966
> URL: https://issues.apache.org/jira/browse/HADOOP-16966
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Ishani
>Assignee: Ishani
>Priority: Major
>
> When the new RestVersion(2019-12-12) is enabled in the backend, enable that 
> in the driver along with the documentation for the appendblob.key config 
> values which are possible with the new RestVersion.
>  Configs:
> fs.azure.appendblob.directories
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16966) ABFS: Enable new Rest Version and add documentation for appendblob

2020-07-07 Thread Ishani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishani updated HADOOP-16966:

Summary: ABFS: Enable new Rest Version and add documentation for appendblob 
 (was: ABFS: Enable new Rest Version and add documentation for appendblob and 
appendWIthFlush config parameters.)

> ABFS: Enable new Rest Version and add documentation for appendblob
> --
>
> Key: HADOOP-16966
> URL: https://issues.apache.org/jira/browse/HADOOP-16966
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Ishani
>Assignee: Ishani
>Priority: Major
>
> When the new RestVersion(2019-02-10) is enabled in the backend, enable that 
> in the driver along with the documentation for the appendblob.key config 
> values which are possible with the new RestVersion.
>  Configs:
> fs.azure.enable.appendwithflush
> fs.azure.appendblob.key
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17058) Support for Appendblob in abfs driver

2020-07-07 Thread Ishani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishani resolved HADOOP-17058.
-
Resolution: Fixed

> Support for Appendblob in abfs driver
> -
>
> Key: HADOOP-17058
> URL: https://issues.apache.org/jira/browse/HADOOP-17058
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Ishani
>Assignee: Ishani
>Priority: Major
>
> add changes to support appendblob in the hadoop-azure abfs driver.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] amahussein closed pull request #2124: HADOOP-17101. replace Guava Function

2020-07-07 Thread GitBox


amahussein closed pull request #2124:
URL: https://github.com/apache/hadoop/pull/2124


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17101) Replace Guava Function with Java8+ Function

2020-07-07 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein updated HADOOP-17101:
---
Attachment: HADOOP-17101.003.patch

> Replace Guava Function with Java8+ Function
> ---
>
> Key: HADOOP-17101
> URL: https://issues.apache.org/jira/browse/HADOOP-17101
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
> Attachments: HADOOP-17101.001.patch, HADOOP-17101.002.patch, 
> HADOOP-17101.003.patch
>
>
> {code:java}
> Targets
> Occurrences of 'com.google.common.base.Function'
> Found Occurrences  (7 usages found)
> hadoop-hdfs-project/hadoop-hdfs/dev-support/jdiff  (1 usage found)
> Apache_Hadoop_HDFS_2.6.0.xml  (1 usage found)
> 13603  type="com.google.common.base.Function"
> org.apache.hadoop.hdfs.server.blockmanagement  (1 usage found)
> HostSet.java  (1 usage found)
> 20 import com.google.common.base.Function;
> org.apache.hadoop.hdfs.server.datanode.checker  (1 usage found)
> AbstractFuture.java  (1 usage found)
> 58 * (ListenableFuture, com.google.common.base.Function) 
> Futures.transform}
> org.apache.hadoop.hdfs.server.namenode.ha  (1 usage found)
> HATestUtil.java  (1 usage found)
> 40 import com.google.common.base.Function;
> org.apache.hadoop.hdfs.server.protocol  (1 usage found)
> RemoteEditLog.java  (1 usage found)
> 20 import com.google.common.base.Function;
> org.apache.hadoop.mapreduce.lib.input  (1 usage found)
> TestFileInputFormat.java  (1 usage found)
> 58 import com.google.common.base.Function;
> org.apache.hadoop.yarn.api.protocolrecords.impl.pb  (1 usage found)
> GetApplicationsRequestPBImpl.java  (1 usage found)
> 38 import com.google.common.base.Function;
> {code}
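As a minimal, self-contained sketch of the replacement (class and method names here are illustrative, not taken from the patch), a Guava `Function` maps directly onto `java.util.function.Function`, which can also be written as a lambda or method reference:

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

public class FunctionMigration {
  // Before (Guava):
  //   com.google.common.base.Function<String, Integer> len =
  //       new Function<String, Integer>() {
  //         @Override public Integer apply(String s) { return s.length(); }
  //       };
  // After (JDK): same shape, usable as a method reference.
  static final Function<String, Integer> LEN = String::length;

  // Apply the function over a list via the Stream API instead of
  // Guava's Iterables/Collections2 transform helpers.
  static List<Integer> lengths(List<String> names) {
    return names.stream().map(LEN).collect(Collectors.toList());
  }

  public static void main(String[] args) {
    System.out.println(lengths(Arrays.asList("hdfs", "yarn"))); // prints [4, 4]
  }
}
```

The JDK interface uses `apply` as well, so most call sites only need the import changed.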



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17099) Replace Guava Predicate with Java8+ Predicate

2020-07-07 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein updated HADOOP-17099:
---
Attachment: HADOOP-17099.003.patch

> Replace Guava Predicate with Java8+ Predicate
> -
>
> Key: HADOOP-17099
> URL: https://issues.apache.org/jira/browse/HADOOP-17099
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Minor
> Attachments: HADOOP-17099.001.patch, HADOOP-17099.002.patch, 
> HADOOP-17099.003.patch
>
>
> {{com.google.common.base.Predicate}} can be replaced with 
> {{java.util.function.Predicate}}. 
> The change involving 9 occurrences is straightforward:
> {code:java}
> Targets
> Occurrences of 'com.google.common.base.Predicate' in project with mask 
> '*.java'
> Found Occurrences  (9 usages found)
> org.apache.hadoop.hdfs.server.blockmanagement  (1 usage found)
> CombinedHostFileManager.java  (1 usage found)
> 43 import com.google.common.base.Predicate;
> org.apache.hadoop.hdfs.server.namenode  (1 usage found)
> NameNodeResourceChecker.java  (1 usage found)
> 38 import com.google.common.base.Predicate;
> org.apache.hadoop.hdfs.server.namenode.snapshot  (1 usage found)
> Snapshot.java  (1 usage found)
> 41 import com.google.common.base.Predicate;
> org.apache.hadoop.metrics2.impl  (2 usages found)
> MetricsRecords.java  (1 usage found)
> 21 import com.google.common.base.Predicate;
> TestMetricsSystemImpl.java  (1 usage found)
> 41 import com.google.common.base.Predicate;
> org.apache.hadoop.yarn.logaggregation  (1 usage found)
> AggregatedLogFormat.java  (1 usage found)
> 77 import com.google.common.base.Predicate;
> org.apache.hadoop.yarn.logaggregation.filecontroller  (1 usage found)
> LogAggregationFileController.java  (1 usage found)
> 22 import com.google.common.base.Predicate;
> org.apache.hadoop.yarn.logaggregation.filecontroller.ifile  (1 usage 
> found)
> LogAggregationIndexedFileController.java  (1 usage found)
> 22 import com.google.common.base.Predicate;
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation 
>  (1 usage found)
> AppLogAggregatorImpl.java  (1 usage found)
> 75 import com.google.common.base.Predicate;
> {code}
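As a minimal, self-contained sketch of the replacement (names here are illustrative, not from the patch), Guava's `Predicate.apply` becomes `java.util.function.Predicate.test`, and Guava's filtering helpers become Stream `filter`:

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class PredicateMigration {
  // Before (Guava): com.google.common.base.Predicate<T> with apply(T),
  // used through Iterables.filter(collection, predicate).
  // After (JDK): java.util.function.Predicate<T> with test(T), via streams.
  static List<String> liveNodes(List<String> nodes, Predicate<String> isLive) {
    return nodes.stream().filter(isLive).collect(Collectors.toList());
  }

  public static void main(String[] args) {
    // Hypothetical node names, for illustration only.
    Predicate<String> notDecommissioned = n -> !n.endsWith(":decom");
    System.out.println(liveNodes(
        Arrays.asList("dn1", "dn2:decom", "dn3"), notDecommissioned));
    // prints [dn1, dn3]
  }
}
```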



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2123: ABFS: Making AzureADAuthenticator.getToken() throw HttpException if a…

2020-07-07 Thread GitBox


hadoop-yetus commented on pull request #2123:
URL: https://github.com/apache/hadoop/pull/2123#issuecomment-654957447


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 17s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  1s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  25m 15s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |   0m 38s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 21s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 36s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 40s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 25s |  hadoop-azure in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   0m 58s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 56s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |   0m 30s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |   0m 25s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 16s |  hadoop-tools/hadoop-azure: The 
patch generated 3 new + 5 unchanged - 0 fixed = 8 total (was 5)  |
   | +1 :green_heart: |  mvnsite  |   0m 29s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 4 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  shadedclient  |  15m 56s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 25s |  hadoop-azure in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   1m  5s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 33s |  hadoop-azure in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 28s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  72m 13s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2123 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux b503ac2358ea 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 2bbd00dff49 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 
|
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/2/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/2/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/2/artifact/out/whitespace-eol.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/2/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/2/testReport/ |
   | Max. process+thread count | 338 (vs. ulimit of 5500) |
   | 

[jira] [Created] (HADOOP-17115) Replace Guava initialization of Sets.newHashSet

2020-07-07 Thread Ahmed Hussein (Jira)
Ahmed Hussein created HADOOP-17115:
--

 Summary: Replace Guava initialization of Sets.newHashSet
 Key: HADOOP-17115
 URL: https://issues.apache.org/jira/browse/HADOOP-17115
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Ahmed Hussein


Unjustified usage of Guava API to initialize a {{HashSet}}. This should be 
replaced by Java APIs.
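As a minimal, self-contained sketch of the replacement (the class and values here are illustrative, not from the listing), `Sets.newHashSet(...)` becomes a plain JDK constructor call, which also works on Java 8:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class HashSetInit {
  // Before (Guava): Set<String> keys = Sets.newHashSet("k1", "k2", "k3");
  // After (JDK): copy-construct from Arrays.asList; the resulting
  // HashSet stays mutable.
  static Set<String> keys() {
    return new HashSet<>(Arrays.asList("k1", "k2", "k3"));
  }

  public static void main(String[] args) {
    // Before (Guava): Set<String> empty = Sets.newHashSet();
    // After (JDK): simply new HashSet<>().
    Set<String> empty = new HashSet<>();
    System.out.println(keys().contains("k2")); // prints true
    System.out.println(empty.isEmpty()); // prints true
  }
}
```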


{code:java}
Targets
Occurrences of 'Sets.newHashSet' in project
Found Occurrences  (223 usages found)
org.apache.hadoop.crypto.key  (2 usages found)
TestValueQueue.java  (2 usages found)
testWarmUp()  (2 usages found)
106 Assert.assertEquals(Sets.newHashSet("k1", "k2", "k3"),
107 Sets.newHashSet(fillInfos[0].key,
org.apache.hadoop.crypto.key.kms  (6 usages found)
TestLoadBalancingKMSClientProvider.java  (6 usages found)
testCreation()  (6 usages found)
86 
> assertEquals(Sets.newHashSet("http://host1:9600/kms/foo/v1/"),
> 87 Sets.newHashSet(providers[0].getKMSUrl()));
> 95 assertEquals(Sets.newHashSet("http://host1:9600/kms/foo/v1/",
> 98 Sets.newHashSet(providers[0].getKMSUrl(),
> 108 
> assertEquals(Sets.newHashSet("http://host1:9600/kms/foo/v1/",
111 Sets.newHashSet(providers[0].getKMSUrl(),
org.apache.hadoop.crypto.key.kms.server  (1 usage found)
KMSAudit.java  (1 usage found)
59 static final Set AGGREGATE_OPS_WHITELIST = 
Sets.newHashSet(
org.apache.hadoop.fs.s3a  (1 usage found)
TestS3AAWSCredentialsProvider.java  (1 usage found)
testFallbackToDefaults()  (1 usage found)
183 Sets.newHashSet());
org.apache.hadoop.fs.s3a.auth  (1 usage found)
AssumedRoleCredentialProvider.java  (1 usage found)
AssumedRoleCredentialProvider(URI, Configuration)  (1 usage found)
113 Sets.newHashSet(this.getClass()));
org.apache.hadoop.fs.s3a.commit.integration  (1 usage found)
ITestS3ACommitterMRJob.java  (1 usage found)
test_200_execute()  (1 usage found)
232 Set expectedKeys = Sets.newHashSet();
org.apache.hadoop.fs.s3a.commit.staging  (5 usages found)
TestStagingCommitter.java  (3 usages found)
testSingleTaskMultiFileCommit()  (1 usage found)
341 Set keys = Sets.newHashSet();
runTasks(JobContext, int, int)  (1 usage found)
603 Set uploads = Sets.newHashSet();
commitTask(StagingCommitter, TaskAttemptContext, int)  (1 usage 
found)
640 Set files = Sets.newHashSet();
TestStagingPartitionedTaskCommit.java  (2 usages found)
verifyFilesCreated(PartitionedStagingCommitter)  (1 usage found)
148 Set files = Sets.newHashSet();
buildExpectedList(StagingCommitter)  (1 usage found)
188 Set expected = Sets.newHashSet();
org.apache.hadoop.hdfs  (5 usages found)
DFSUtil.java  (2 usages found)
getNNServiceRpcAddressesForCluster(Configuration)  (1 usage found)
615 Set availableNameServices = Sets.newHashSet(conf
getNNLifelineRpcAddressesForCluster(Configuration)  (1 usage found)
660 Set availableNameServices = Sets.newHashSet(conf
MiniDFSCluster.java  (1 usage found)
597 private Set fileSystems = Sets.newHashSet();
TestDFSUtil.java  (2 usages found)
testGetNNServiceRpcAddressesForNsIds()  (2 usages found)
1046 assertEquals(Sets.newHashSet("nn1"), internal);
1049 assertEquals(Sets.newHashSet("nn1", "nn2"), all);
org.apache.hadoop.hdfs.net  (5 usages found)
TestDFSNetworkTopology.java  (5 usages found)
testChooseRandomWithStorageType()  (4 usages found)
277 Sets.newHashSet("host2", "host4", "host5", "host6");
278 Set archiveUnderL1 = Sets.newHashSet("host1", 
"host3");
279 Set ramdiskUnderL1 = Sets.newHashSet("host7");
280 Set ssdUnderL1 = Sets.newHashSet("host8");
testChooseRandomWithStorageTypeWithExcluded()  (1 usage found)
363 Set expectedSet = Sets.newHashSet("host4", "host5");
org.apache.hadoop.hdfs.qjournal.server  (2 usages found)
JournalNodeSyncer.java  (2 usages found)
getOtherJournalNodeAddrs()  (1 usage found)
276 HashSet sharedEditsUri = Sets.newHashSet();
getJournalAddrList(String)  (1 usage found)
318 Sets.newHashSet(jn.getBoundIpcAddress()));
org.apache.hadoop.hdfs.server.datanode  (5 usages found)
BlockPoolManager.java  (1 usage found)
doRefreshNamenodes(Map>, 
Map>)  (1 usage found)
198 toRemove = Sets.newHashSet(Sets.difference(

[jira] [Created] (HADOOP-17114) Replace Guava initialization of Lists.newArrayList

2020-07-07 Thread Ahmed Hussein (Jira)
Ahmed Hussein created HADOOP-17114:
--

 Summary: Replace Guava initialization of Lists.newArrayList
 Key: HADOOP-17114
 URL: https://issues.apache.org/jira/browse/HADOOP-17114
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Ahmed Hussein


There are unjustified uses of Guava APIs to initialize Lists. These could be 
simply replaced by the Java API.
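As a minimal, self-contained sketch of the replacement (the class and values here are illustrative, not from the listing), `Lists.newArrayList(...)` becomes a plain JDK constructor call:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ArrayListInit {
  // Before (Guava): List<String> changes = Lists.newArrayList();
  // After (JDK): the plain constructor.
  static List<String> empty() {
    return new ArrayList<>();
  }

  // Before (Guava): Lists.newArrayList("a", "b");
  // After (JDK): copy Arrays.asList so the result stays mutable
  // (Arrays.asList itself is fixed-size).
  static List<String> of(String... items) {
    return new ArrayList<>(Arrays.asList(items));
  }

  public static void main(String[] args) {
    List<String> acl = of("user::rwx", "group::r-x");
    acl.add("other::---"); // mutation is safe on the copied list
    System.out.println(acl.size()); // prints 3
  }
}
```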


{code:java}
Targets
Occurrences of 'Lists.newArrayList' in project
Found Occurrences  (787 usages found)
org.apache.hadoop.conf  (2 usages found)
TestReconfiguration.java  (2 usages found)
testAsyncReconfigure()  (1 usage found)
391 List changes = Lists.newArrayList();
testStartReconfigurationFailureDueToExistingRunningTask()  (1 usage 
found)
435 List changes = Lists.newArrayList(
org.apache.hadoop.crypto  (1 usage found)
CryptoCodec.java  (1 usage found)
getCodecClasses(Configuration, CipherSuite)  (1 usage found)
107 List> result = 
Lists.newArrayList();
org.apache.hadoop.fs.azurebfs  (84 usages found)
ITestAbfsIdentityTransformer.java  (7 usages found)
transformAclEntriesForSetRequest()  (3 usages found)
240 List aclEntriesToBeTransformed = 
Lists.newArrayList(
253 List aclEntries = 
Lists.newArrayList(aclEntriesToBeTransformed);
271 List expectedAclEntries = Lists.newArrayList(
transformAclEntriesForGetRequest()  (4 usages found)
291 List aclEntriesToBeTransformed = 
Lists.newArrayList(
302 List aclEntries = 
Lists.newArrayList(aclEntriesToBeTransformed);
318 aclEntries = Lists.newArrayList(aclEntriesToBeTransformed);
322 List expectedAclEntries = Lists.newArrayList(
ITestAzureBlobFilesystemAcl.java  (76 usages found)
testModifyAclEntries()  (2 usages found)
95 List aclSpec = Lists.newArrayList(
103 aclSpec = Lists.newArrayList(
testModifyAclEntriesOnlyAccess()  (2 usages found)
128 List aclSpec = Lists.newArrayList(
134 aclSpec = Lists.newArrayList(
testModifyAclEntriesOnlyDefault()  (2 usages found)
151 List aclSpec = Lists.newArrayList(
154 aclSpec = Lists.newArrayList(
testModifyAclEntriesMinimal()  (1 usage found)
175 List aclSpec = Lists.newArrayList(
testModifyAclEntriesMinimalDefault()  (1 usage found)
192 List aclSpec = Lists.newArrayList(
testModifyAclEntriesCustomMask()  (1 usage found)
213 List aclSpec = Lists.newArrayList(
testModifyAclEntriesStickyBit()  (2 usages found)
231 List aclSpec = Lists.newArrayList(
238 aclSpec = Lists.newArrayList(
testModifyAclEntriesPathNotFound()  (1 usage found)
261 List aclSpec = Lists.newArrayList(
testModifyAclEntriesDefaultOnFile()  (1 usage found)
276 List aclSpec = Lists.newArrayList(
testModifyAclEntriesWithDefaultMask()  (2 usages found)
287 List aclSpec = Lists.newArrayList(
291 List modifyAclSpec = Lists.newArrayList(
testModifyAclEntriesWithAccessMask()  (2 usages found)
311 List aclSpec = Lists.newArrayList(
315 List modifyAclSpec = Lists.newArrayList(
testModifyAclEntriesWithDuplicateEntries()  (2 usages found)
332 List aclSpec = Lists.newArrayList(
336 List modifyAclSpec = Lists.newArrayList(
testRemoveAclEntries()  (2 usages found)
348 List aclSpec = Lists.newArrayList(
355 aclSpec = Lists.newArrayList(
testRemoveAclEntriesOnlyAccess()  (2 usages found)
377 List aclSpec = Lists.newArrayList(
384 aclSpec = Lists.newArrayList(
testRemoveAclEntriesOnlyDefault()  (2 usages found)
401 List aclSpec = Lists.newArrayList(
408 aclSpec = Lists.newArrayList(
testRemoveAclEntriesMinimal()  (2 usages found)
429 List aclSpec = Lists.newArrayList(
435 aclSpec = Lists.newArrayList(
testRemoveAclEntriesMinimalDefault()  (2 usages found)
451 List aclSpec = Lists.newArrayList(
458 aclSpec = Lists.newArrayList(
testRemoveAclEntriesStickyBit()  (2 usages found)
479 List aclSpec = Lists.newArrayList(
486 aclSpec = Lists.newArrayList(
testRemoveAclEntriesPathNotFound()  (1 usage found)
507 List aclSpec = Lists.newArrayList(
testRemoveAclEntriesAccessMask()  (2 usages found)
518 List 

[GitHub] [hadoop] amahussein opened a new pull request #2124: HADOOP-17001. replace Guava Function

2020-07-07 Thread GitBox


amahussein opened a new pull request #2124:
URL: https://github.com/apache/hadoop/pull/2124


   - I have added a rule to prevent import of Guava `Function`, `Multimaps` and 
`ImmutableListMultimap`
   - I added a new package `org.apache.hadoop.util.noguava` that will be the 
placeholder for all utilities we need to replace the Guava API



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mukund-thakur commented on a change in pull request #2069: HADOOP-16830. IOStatistics API.

2020-07-07 Thread GitBox


mukund-thakur commented on a change in pull request #2069:
URL: https://github.com/apache/hadoop/pull/2069#discussion_r450905137



##
File path: 
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/iostatistics.md
##
@@ -0,0 +1,432 @@
+
+
+# Statistic collection with the IOStatistics API
+
+```java
+@InterfaceAudience.Public
+@InterfaceStability.Unstable
+```
+
+The `IOStatistics` API is intended to provide statistics on individual IO
+classes, such as input and output streams, *in a standard way which 
+applications can query*.
+
+Many filesystem-related classes have implemented statistics gathering
+and provided private/unstable ways to query this, but as they were
+not common across implementations it was unsafe for applications
+to reference these values. Example: `S3AInputStream` and its statistics
+API. This is used in internal tests, but cannot be used downstream in
+applications such as Apache Hive or Apache HBase.
+
+The IOStatistics API is intended to 
+
+1. Be instance specific, rather than shared across multiple instances
+   of a class, or thread local.
+1. Be public and stable enough to be used by applications.
+1. Be easy to use in applications written in Java, Scala, and, via libhdfs, 
C/C++
+1. Have foundational interfaces and classes in the `hadoop-common` JAR.
+
+## Core Model
+
+Any class *may* implement `IOStatisticsSource` in order to
+provide statistics.
+
+Wrapper I/O Classes such as `FSDataInputStream` and `FSDataOutputStream` 
*should*
+implement the interface and forward it to the wrapped class, if they also
+implement it, and return `null` if they do not.
+
+Implementations of `IOStatisticsSource.getIOStatistics()` return an
+instance of `IOStatistics` enumerating the statistics of that specific
+instance.
+
+The `IOStatistics` Interface exports five kinds of statistic:
+
+
+| Category | Type | Description |
+|--|--|-|
+| `counter`| `long`  | a counter which may increase in value; 
SHOULD BE >= 0 |
+| `gauge`  | `long`  | an arbitrary value which can go down as 
well as up; SHOULD BE >= 0|
+| `minimum`| `long`  | a minimum value; MAY BE negative |
+| `maximum`| `long`  | a maximum value;  MAY BE negative |
+| `meanStatistic` | `MeanStatistic` | an arithmetic mean and sample size; mean 
MAY BE negative|
+
+Four are simple `long` values, with variations in how they are likely to
+change and how they are aggregated.
+
+
+ Aggregation of Statistic Values
+
+For the different statistic category, the result of `aggregate(x, y)` is
+
+| Category | Aggregation |
+|--|-|
+| `counter`| `max(0, x) + max(0, y)` |
+| `gauge`  | `max(0, x) + max(0, y)` |
+| `minimum`| `min(x, y)` |
+| `maximum`| `max(x, y)` |
+| `meanStatistic` | calculation of the mean of `x` and `y` |
+
+
+ Class `MeanStatistic`
+
+## package `org.apache.hadoop.fs.statistics`
+
+This package contains the public statistics APIs intended
+for use by applications.
+
+
+
+
+
+`MeanStatistic` is a tuple of `(mean, samples)` to support aggregation.
+
+A `MeanStatistic`  with a sample of `0` is considered an empty statistic.
+
+All `MeanStatistic` instances where `sample = 0` are considered equal,
+irrespective of the `mean` value.
+
+Algorithm to calculate the mean:
+
+```python
+if x.samples == 0:
+    y
+else if y.samples == 0:
+    x
+else:
+    samples' = x.samples + y.samples
+    mean' = ((x.mean * x.samples) + (y.mean * y.samples)) / samples'
+    (samples', mean')
+```
+
+Implicitly, this means that if both samples are empty, then the aggregate 
value is also empty.
+
+```java
+public final class MeanStatistic implements Serializable, Cloneable {
+  /**
+   * Arithmetic mean.
+   */
+  private double mean;
+
+  /**
+   * Number of samples used to calculate
+   * the mean.
+   */
+  private long samples;
+
+  /**
+   * Get the mean value.
+   * @return the mean
+   */
+  public double getMean() {
+return mean;
+  }
+
+  /**
+   * Get the sample count.
+   * @return the sample count; 0 means empty
+   */
+  public long getSamples() {
+return samples;
+  }
+
+  /**
+   * Is a statistic empty?
+   * @return true if the sample count is 0
+   */
+  public boolean isEmpty() {
+return samples == 0;
+  }
+   /**
+   * Add another mean statistic to create a new statistic.
+   * When adding two statistics, if either is empty then
+   * a copy of the non-empty statistic is returned.
+   * If both are empty then a new empty statistic is returned.
+   *
+   * @param other other value
+   * @return the aggregate mean
+   */
+  public MeanStatistic add(final MeanStatistic other) {
+/* Implementation elided. */
+  }
+  @Override
+  public int hashCode() {
+return Objects.hash(mean, samples);
+  }
+
+  @Override
+  public boolean equals(final Object o) {
+if (this == o) { return true; }
+if (o == null || getClass() != 

[GitHub] [hadoop] mukund-thakur commented on a change in pull request #2069: HADOOP-16830. IOStatistics API.

2020-07-07 Thread GitBox


mukund-thakur commented on a change in pull request #2069:
URL: https://github.com/apache/hadoop/pull/2069#discussion_r450904865



##
File path: 
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/iostatistics.md
##
@@ -0,0 +1,432 @@
+
+
+# Statistic collection with the IOStatistics API
+
+```java
+@InterfaceAudience.Public
+@InterfaceStability.Unstable
+```
+
+The `IOStatistics` API is intended to provide statistics on individual IO
+classes, such as input and output streams, *in a standard way which 
+applications can query*.
+
+Many filesystem-related classes have implemented statistics gathering
+and provided private/unstable ways to query this, but as they were
+not common across implementations it was unsafe for applications
+to reference these values. Example: `S3AInputStream` and its statistics
+API. This is used in internal tests, but cannot be used downstream in
+applications such as Apache Hive or Apache HBase.
+
+The IOStatistics API is intended to 
+
+1. Be instance specific, rather than shared across multiple instances
+   of a class, or thread local.
+1. Be public and stable enough to be used by applications.
+1. Be easy to use in applications written in Java, Scala, and, via libhdfs, 
C/C++
+1. Have foundational interfaces and classes in the `hadoop-common` JAR.
+
+## Core Model
+
+Any class *may* implement `IOStatisticsSource` in order to
+provide statistics.
+
+Wrapper I/O Classes such as `FSDataInputStream` and `FSDataOutputStream` 
*should*
+implement the interface and forward it to the wrapped class, if they also
+implement it, and return `null` if they do not.
+
+The `getIOStatistics()` method of an `IOStatisticsSource` implementation returns
+an instance of `IOStatistics` enumerating the statistics of that specific
+instance.
+
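The forwarding rule above can be sketched in executable form. The following is a minimal Python model with hypothetical names (`InnerStream`, `WrapperStream`, `get_io_statistics`), not the Hadoop API itself; it only illustrates "forward if the wrapped object is a statistics source, otherwise report nothing":

```python
class InnerStream:
    """Hypothetical stream that acts as a statistics source."""
    def __init__(self):
        self._stats = {"stream_read_bytes": 0}

    def get_io_statistics(self):
        # Statistics of this specific instance.
        return self._stats


class WrapperStream:
    """Hypothetical wrapper (in the spirit of FSDataInputStream):
    forwards the statistics call to the wrapped object."""
    def __init__(self, inner):
        self._inner = inner

    def get_io_statistics(self):
        # Forward if the wrapped object implements the source
        # interface; return None if it does not.
        getter = getattr(self._inner, "get_io_statistics", None)
        return getter() if callable(getter) else None


print(WrapperStream(InnerStream()).get_io_statistics())  # {'stream_read_bytes': 0}
print(WrapperStream(object()).get_io_statistics())       # None
```

In Java the same decision is made with an `instanceof` check on the wrapped stream rather than reflection; the point is that the wrapper never invents statistics of its own.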
+The `IOStatistics` Interface exports five kinds of statistic:
+
+
+| Category | Type | Description |
+|--|--|-|
+| `counter`| `long`  | a counter which may increase in value; SHOULD BE >= 0 |
+| `gauge`  | `long`  | an arbitrary value which can go down as well as up; SHOULD BE >= 0 |
+| `minimum`| `long`  | a minimum value; MAY BE negative |
+| `maximum`| `long`  | a maximum value; MAY BE negative |
+| `meanStatistic` | `MeanStatistic` | an arithmetic mean and sample size; mean MAY BE negative |
+
+Four are simple `long` values, with variations in how they are likely to
+change and how they are aggregated.
+
+
+#### Aggregation of Statistic Values
+
+For each statistic category, the result of `aggregate(x, y)` is:
+
+| Category | Aggregation |
+|--|-|
+| `counter`| `min(0, x) + min(0, y)` |
+| `gauge`  | `min(0, x) + min(0, y)` |
+| `minimum`| `min(x, y)` |
+| `maximum`| `max(x, y)` |
+| `meanStatistic` | calculation of the mean of `x` and `y` |
+
+
+#### Class `MeanStatistic`
+
+## package `org.apache.hadoop.fs.statistics`
+
+This package contains the public statistics APIs intended
+for use by applications.
+
+
+`MeanStatistic` is a tuple of `(mean, samples)` to support aggregation.
+
+A `MeanStatistic` with a sample count of `0` is considered an empty statistic.
+
+All `MeanStatistic` instances where `sample = 0` are considered equal,
+irrespective of the `mean` value.
+
+Algorithm to calculate the aggregate mean:
+
+```python
+if x.samples == 0:
+    y
+else if y.samples == 0:
+    x
+else:
+    samples' = x.samples + y.samples
+    mean' = ((x.mean * x.samples) + (y.mean * y.samples)) / samples'
+    (samples', mean')
+```
+
+Implicitly, this means that if both statistics are empty, then the aggregate
+value is also empty.
+
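As an executable illustration of the pseudocode above, here is a small Python sketch; `MeanStat` is a hypothetical stand-in for the `MeanStatistic` class, and the weighted mean is written with explicit parentheses so both products are summed before dividing:

```python
from collections import namedtuple

# Hypothetical stand-in for the (mean, samples) tuple; not the Hadoop class.
MeanStat = namedtuple("MeanStat", ["mean", "samples"])


def aggregate(x, y):
    """Aggregate two mean statistics.

    An empty statistic (samples == 0) contributes nothing, so the other
    operand is returned unchanged; if both are empty, the result is
    itself empty."""
    if x.samples == 0:
        return y
    if y.samples == 0:
        return x
    samples = x.samples + y.samples
    # Weighted mean: sum both products *before* dividing by the new count.
    mean = ((x.mean * x.samples) + (y.mean * y.samples)) / samples
    return MeanStat(mean, samples)


print(aggregate(MeanStat(2.0, 4), MeanStat(8.0, 2)))  # MeanStat(mean=4.0, samples=6)
print(aggregate(MeanStat(0.0, 0), MeanStat(8.0, 2)))  # MeanStat(mean=8.0, samples=2)
```

Note that `(2.0 * 4 + 8.0 * 2) / 6 = 4.0`: the aggregate is pulled toward the side with more samples, which is why the sample counts must travel with the means.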
+```java
+public final class MeanStatistic implements Serializable, Cloneable {
+  /**
+   * Arithmetic mean.
+   */
+  private double mean;
+
+  /**
+   * Number of samples used to calculate
+   * the mean.
+   */
+  private long samples;
+
+  /**
+   * Get the mean value.
+   * @return the mean
+   */
+  public double getMean() {
+return mean;
+  }
+
+  /**
+   * Get the sample count.
+   * @return the sample count; 0 means empty
+   */
+  public long getSamples() {
+return samples;
+  }
+
+  /**
+   * Is a statistic empty?
+   * @return true if the sample count is 0
+   */
+  public boolean isEmpty() {
+return samples == 0;
+  }
+  /**
+   * Add another mean statistic to create a new statistic.
+   * When adding two statistics, if either is empty then
+   * a copy of the non-empty statistic is returned.
+   * If both are empty then a new empty statistic is returned.
+   *
+   * @param other other value
+   * @return the aggregate mean
+   */
+  public MeanStatistic add(final MeanStatistic other) {
+/* Implementation elided. */
+  }
+  @Override
+  public int hashCode() {
+return Objects.hash(mean, samples);
+  }
+
+  @Override
+  public boolean equals(final Object o) {
+if (this == o) { return true; }
+if (o == null || getClass() != 

[GitHub] [hadoop] mukund-thakur commented on a change in pull request #2069: HADOOP-16830. IOStatistics API.

2020-07-07 Thread GitBox


mukund-thakur commented on a change in pull request #2069:
URL: https://github.com/apache/hadoop/pull/2069#discussion_r450904423



##
File path: 
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/iostatistics.md
##
@@ -0,0 +1,432 @@
+
+
+# Statistic collection with the IOStatistics API
+
+```java
+@InterfaceAudience.Public
+@InterfaceStability.Unstable
+```
+
+The `IOStatistics` API is intended to provide statistics on individual IO
+classes, such as input and output streams, *in a standard way which
+applications can query*.
+
+Many filesystem-related classes have implemented statistics gathering
+and provided private/unstable ways to query this, but as they were
+not common across implementations it was unsafe for applications
+to reference these values. Example: `S3AInputStream` and its statistics
+API. This is used in internal tests, but cannot be used downstream in
+applications such as Apache Hive or Apache HBase.
+
+The IOStatistics API is intended to:
+
+1. Be instance specific, rather than shared across multiple instances
+   of a class, or thread local.
+1. Be public and stable enough to be used by applications.
+1. Be easy to use in applications written in Java, Scala, and, via libhdfs, 
C/C++
+1. Have foundational interfaces and classes in the `hadoop-common` JAR.
+
+## Core Model
+
+Any class *may* implement `IOStatisticsSource` in order to
+provide statistics.
+
+Wrapper I/O classes such as `FSDataInputStream` and `FSDataOutputStream` *should*
+implement the interface and forward it to the wrapped class, if they also
+implement it, and return `null` if they do not.
+
+The `getIOStatistics()` method of an `IOStatisticsSource` implementation returns
+an instance of `IOStatistics` enumerating the statistics of that specific
+instance.
+
+The `IOStatistics` Interface exports five kinds of statistic:
+
+
+| Category | Type | Description |
+|--|--|-|
+| `counter`| `long`  | a counter which may increase in value; SHOULD BE >= 0 |
+| `gauge`  | `long`  | an arbitrary value which can go down as well as up; SHOULD BE >= 0 |
+| `minimum`| `long`  | a minimum value; MAY BE negative |
+| `maximum`| `long`  | a maximum value; MAY BE negative |
+| `meanStatistic` | `MeanStatistic` | an arithmetic mean and sample size; mean MAY BE negative |
+
+Four are simple `long` values, with variations in how they are likely to
+change and how they are aggregated.
+
+
+#### Aggregation of Statistic Values
+
+For each statistic category, the result of `aggregate(x, y)` is:
+
+| Category | Aggregation |
+|--|-|
+| `counter`| `min(0, x) + min(0, y)` |

Review comment:
   Counters should be a simple addition right?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17113) Adding ReadAhead Counters in ABFS

2020-07-07 Thread Mehakmeet Singh (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mehakmeet Singh updated HADOOP-17113:
-
Component/s: fs/azure
Description: 
Adding ReadAhead counters in ABFS to track the behavior of the ReadAhead 
feature in ABFS. This would include two counters:


|READ_AHEAD_REQUESTED_BYTES|number of bytes read by readAhead|
|READ_AHEAD_REMOTE_BYTES|number of bytes not used after readAhead was used|

> Adding ReadAhead Counters in ABFS
> -
>
> Key: HADOOP-17113
> URL: https://issues.apache.org/jira/browse/HADOOP-17113
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Mehakmeet Singh
>Assignee: Mehakmeet Singh
>Priority: Major
>
> Adding ReadAhead counters in ABFS to track the behavior of the ReadAhead 
> feature in ABFS. This would include two counters:
> |READ_AHEAD_REQUESTED_BYTES|number of bytes read by readAhead|
> |READ_AHEAD_REMOTE_BYTES|number of bytes not used after readAhead was used|



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17113) Adding ReadAhead Counters in ABFS

2020-07-07 Thread Mehakmeet Singh (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mehakmeet Singh updated HADOOP-17113:
-
Affects Version/s: 3.3.0

> Adding ReadAhead Counters in ABFS
> -
>
> Key: HADOOP-17113
> URL: https://issues.apache.org/jira/browse/HADOOP-17113
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Mehakmeet Singh
>Assignee: Mehakmeet Singh
>Priority: Major
>





-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17113) Adding ReadAhead Counters in ABFS

2020-07-07 Thread Mehakmeet Singh (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mehakmeet Singh updated HADOOP-17113:
-
Summary: Adding ReadAhead Counters in ABFS  (was: Adding ReadAhead )

> Adding ReadAhead Counters in ABFS
> -
>
> Key: HADOOP-17113
> URL: https://issues.apache.org/jira/browse/HADOOP-17113
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Mehakmeet Singh
>Priority: Major
>





-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-17113) Adding ReadAhead Counters in ABFS

2020-07-07 Thread Mehakmeet Singh (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mehakmeet Singh reassigned HADOOP-17113:


Assignee: Mehakmeet Singh

> Adding ReadAhead Counters in ABFS
> -
>
> Key: HADOOP-17113
> URL: https://issues.apache.org/jira/browse/HADOOP-17113
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Mehakmeet Singh
>Assignee: Mehakmeet Singh
>Priority: Major
>





-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17113) Adding ReadAhead

2020-07-07 Thread Mehakmeet Singh (Jira)
Mehakmeet Singh created HADOOP-17113:


 Summary: Adding ReadAhead 
 Key: HADOOP-17113
 URL: https://issues.apache.org/jira/browse/HADOOP-17113
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Mehakmeet Singh







-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2110: HDFS-15447 RBF: Add top real owners metrics for delegation tokens

2020-07-07 Thread GitBox


hadoop-yetus commented on pull request #2110:
URL: https://github.com/apache/hadoop/pull/2110#issuecomment-654735425


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  21m 37s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m  2s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  18m 59s |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m  8s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |  17m 22s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   2m 37s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m  9s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 15s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 45s |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 42s |  hadoop-hdfs-rbf in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   1m 49s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   1m 21s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 28s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 21s |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 30s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |  18m 30s |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 34s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |  16m 34s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 45s |  root: The patch generated 1 new 
+ 60 unchanged - 2 fixed = 61 total (was 62)  |
   | +1 :green_heart: |  mvnsite  |   2m 21s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  14m 33s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 39s |  hadoop-common in the patch failed with 
JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 37s |  hadoop-hdfs-rbf in the patch failed with 
JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   1m 35s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   3m 52s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 16s |  hadoop-common in the patch passed. 
 |
   | -1 :x: |  unit  |   8m 15s |  hadoop-hdfs-rbf in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 51s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 191m 47s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2110/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2110 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 89c2dced6705 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f77bbc2123e |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 
|
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2110/5/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   | javadoc | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2097: HADOOP-17088. Failed to load Xinclude files with relative path in cas…

2020-07-07 Thread GitBox


hadoop-yetus commented on pull request #2097:
URL: https://github.com/apache/hadoop/pull/2097#issuecomment-654730841


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  26m 18s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  22m 26s |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m 32s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |  17m 22s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 47s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 21s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 58s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 36s |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   2m 12s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m 10s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 49s |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 31s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |  20m 31s |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 44s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |  17m 44s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 47s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 22s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 33s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 35s |  hadoop-common in the patch failed with 
JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   2m 16s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  10m 10s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 45s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 183m 49s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2097/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2097 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 11209ea5bad4 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f77bbc2123e |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 
|
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2097/2/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2097/2/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2097/2/testReport/ |
   | Max. process+thread count | 1411 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2097/2/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] crossfire commented on a change in pull request #2097: HADOOP-17088. Failed to load Xinclude files with relative path in cas…

2020-07-07 Thread GitBox


crossfire commented on a change in pull request #2097:
URL: https://github.com/apache/hadoop/pull/2097#discussion_r450710197



##
File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java
##
@@ -1062,6 +1062,38 @@ public void testRelativeIncludes() throws Exception {
 new File(new File(relConfig).getParent()).delete();
   }
 
+  @Test
+  public void testRelativeIncludesWithLoadingViaUri() throws Exception {
+tearDown();
+File configFile = new File("./tmp/test-config.xml");
+File configFile2 = new File("./tmp/test-config2.xml");
+
+new File(configFile.getParent()).mkdirs();
+out = new BufferedWriter(new FileWriter(configFile2));
+startConfig();
+appendProperty("a", "b");
+endConfig();
+
+out = new BufferedWriter(new FileWriter(configFile));
+startConfig();
+// Add the relative path instead of the absolute one.
+startInclude(configFile2.getName());
+endInclude();
+appendProperty("c", "d");
+endConfig();
+
+// verify that the includes file contains all properties
+Path fileResource = new Path(configFile.toURI());
+conf.addResource(fileResource);
+assertEquals(conf.get("a"), "b");

Review comment:
   Thanks! Fixed.








-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mukund-thakur commented on a change in pull request #2113: HADOOP-17105: S3AFS - Do not attempt to resolve symlinks in globStatus

2020-07-07 Thread GitBox


mukund-thakur commented on a change in pull request #2113:
URL: https://github.com/apache/hadoop/pull/2113#discussion_r450705856



##
File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AFileOperationCost.java
##
@@ -574,4 +574,48 @@ public void testCreateCost() throws Throwable {
 }
 
   }
+
+  @Test
+  public void testCostOfGlobStatus() throws Throwable {
+describe("Test globStatus has expected cost");
+S3AFileSystem fs = getFileSystem();
+assume("Unguarded FS only", !fs.hasMetadataStore());
+
+Path basePath = path("testCostOfGlobStatus/nextFolder/");
+
+// create a bunch of files
+int filesToCreate = 10;
+for (int i = 0; i < filesToCreate; i++) {
+  try (FSDataOutputStream out = fs.create(basePath.suffix("/" + i))) {
+verifyOperationCount(1, 1);
+  }
+}
+
+fs.globStatus(basePath.suffix("/*"));
+// 2 head + 1 list from getFileStatus on path,
+// plus 1 list to match the glob pattern
+verifyOperationCount(2, 2);
+  }
+
+  @Test
+  public void testCostOfGlobStatusNoSymlinkResolution() throws Throwable {

Review comment:
   I got it it. 
https://github.com/apache/hadoop/blob/f77bbc2123e3b39117f42e2c9471eb83da98380e/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Globber.java#L292
 
   Thanks. 








-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mukund-thakur commented on a change in pull request #2038: HADOOP-17022 Tune S3AFileSystem.listFiles() api.

2020-07-07 Thread GitBox


mukund-thakur commented on a change in pull request #2038:
URL: https://github.com/apache/hadoop/pull/2038#discussion_r450676043



##
File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
##
@@ -4181,79 +4181,114 @@ public LocatedFileStatus next() throws IOException {
 Path path = qualify(f);
 LOG.debug("listFiles({}, {})", path, recursive);
 try {
-  // if a status was given, that is used, otherwise
-  // call getFileStatus, which triggers an existence check
-  final S3AFileStatus fileStatus = status != null
-  ? status
-  : (S3AFileStatus) getFileStatus(path);
-  if (fileStatus.isFile()) {
+  // if a status was given and it is a file.
+  if (status != null && status.isFile()) {
 // simple case: File
 LOG.debug("Path is a file");
 return new Listing.SingleStatusRemoteIterator(
-toLocatedFileStatus(fileStatus));
-  } else {
-// directory: do a bulk operation
-String key = maybeAddTrailingSlash(pathToKey(path));
-String delimiter = recursive ? null : "/";
-LOG.debug("Requesting all entries under {} with delimiter '{}'",
-key, delimiter);
-final RemoteIterator cachedFilesIterator;
-final Set tombstones;
-boolean allowAuthoritative = allowAuthoritative(f);
-if (recursive) {
-  final PathMetadata pm = metadataStore.get(path, true);
-  // shouldn't need to check pm.isDeleted() because that will have
-  // been caught by getFileStatus above.
-  MetadataStoreListFilesIterator metadataStoreListFilesIterator =
-  new MetadataStoreListFilesIterator(metadataStore, pm,
-  allowAuthoritative);
-  tombstones = metadataStoreListFilesIterator.listTombstones();
-  // if all of the below is true
-  //  - authoritative access is allowed for this metadatastore for 
this directory,
-  //  - all the directory listings are authoritative on the client
-  //  - the caller does not force non-authoritative access
-  // return the listing without any further s3 access
-  if (!forceNonAuthoritativeMS &&
-  allowAuthoritative &&
-  metadataStoreListFilesIterator.isRecursivelyAuthoritative()) {
-S3AFileStatus[] statuses = S3Guard.iteratorToStatuses(
-metadataStoreListFilesIterator, tombstones);
-cachedFilesIterator = listing.createProvidedFileStatusIterator(
-statuses, ACCEPT_ALL, acceptor);
-return 
listing.createLocatedFileStatusIterator(cachedFilesIterator);
-  }
-  cachedFilesIterator = metadataStoreListFilesIterator;
-} else {
-  DirListingMetadata meta =
-  S3Guard.listChildrenWithTtl(metadataStore, path, ttlTimeProvider,
-  allowAuthoritative);
-  if (meta != null) {
-tombstones = meta.listTombstones();
-  } else {
-tombstones = null;
-  }
-  cachedFilesIterator = listing.createProvidedFileStatusIterator(
-  S3Guard.dirMetaToStatuses(meta), ACCEPT_ALL, acceptor);
-  if (allowAuthoritative && meta != null && meta.isAuthoritative()) {
-// metadata listing is authoritative, so return it directly
-return 
listing.createLocatedFileStatusIterator(cachedFilesIterator);
-  }
+toLocatedFileStatus(status));
+  }
+  // Assuming the path to be a directory
+  // do a bulk operation.
+  RemoteIterator listFilesAssumingDir =
+  getListFilesAssumingDir(path,
+  recursive,
+  acceptor,
+  collectTombstones,
+  forceNonAuthoritativeMS);
+  // If there are no list entries present, we
+  // fallback to file existence check as the path
+  // can be a file or empty directory.
+  if (!listFilesAssumingDir.hasNext()) {
+final S3AFileStatus fileStatus = (S3AFileStatus) getFileStatus(path);

Review comment:
   If there is an empty directory within a base directory. For example 
directory structure in test 
ITestS3AContractGetFileStatus>AbstractContractGetFileStatusTest.testListFilesEmptyDirectoryRecursive
   There won't be any files thus listing will be empty. 








-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2037: HDFS-14984. HDFS setQuota: Error message should be added for invalid …

2020-07-07 Thread GitBox


hadoop-yetus commented on pull request #2037:
URL: https://github.com/apache/hadoop/pull/2037#issuecomment-654652151


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  25m 48s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m  5s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m  1s |  trunk passed  |
   | +1 :green_heart: |  compile  |   4m  7s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |   3m 37s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m  6s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 36s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 36s |  hadoop-hdfs-client in trunk failed with 
JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 33s |  hadoop-hdfs in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   1m 15s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   3m  2s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 25s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  9s |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 36s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |   4m 36s |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m  7s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |   4m  7s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 55s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 22s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  16m 28s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 36s |  hadoop-hdfs-client in the patch failed 
with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 31s |  hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   1m 18s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   6m 31s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m  8s |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 126m  7s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 49s |  The patch does not generate ASF License warnings.  |
   |  |   | 256m 23s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes |
   |   | hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   |   | hadoop.hdfs.server.datanode.TestBPOfferService |
   |   | hadoop.cli.TestHDFSCLI |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-2037/4/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2037 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 8c1359745327 4.15.0-91-generic #92-Ubuntu SMP Fri Feb 28 11:09:48 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f77bbc2123e |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | javadoc | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2073: HADOOP-16998 WASB : NativeAzureFsOutputStream#close() throwing java.l…

2020-07-07 Thread GitBox


hadoop-yetus commented on pull request #2073:
URL: https://github.com/apache/hadoop/pull/2073#issuecomment-654641695


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  25m 21s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  23m 10s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  trunk passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |   0m 34s |  trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 22s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 50s |  branch has no errors when building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 27s |  hadoop-azure in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   1m  2s |  Used deprecated FindBugs config; considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 59s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 32s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  the patch passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |   0m 32s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |   0m 26s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 17s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 54s |  patch has no errors when building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 24s |  hadoop-azure in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   1m  5s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 31s |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 31s |  The patch does not generate ASF License warnings.  |
   |  |   |  93m 45s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-2073/5/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2073 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 9fb42898486a 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f77bbc2123e |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-2073/5/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt |
   | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-2073/5/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt |
   | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-2073/5/testReport/ |
   | Max. process+thread count | 414 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-2073/5/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

[GitHub] [hadoop] fengnanli edited a comment on pull request #2110: HDFS-15447 RBF: Add top real owners metrics for delegation tokens

2020-07-07 Thread GitBox


fengnanli edited a comment on pull request #2110:
URL: https://github.com/apache/hadoop/pull/2110#issuecomment-653946194


   Thanks very much @sunchao @goiri @Hexiaoqiao for the detailed review. I have addressed all of the comments; please give it another look.
   




For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] fengnanli commented on pull request #2110: HDFS-15447 RBF: Add top real owners metrics for delegation tokens

2020-07-07 Thread GitBox


fengnanli commented on pull request #2110:
URL: https://github.com/apache/hadoop/pull/2110#issuecomment-654625072


   I didn't know I had to click `resolve conversation` to publish the reply. Just resolved all of the comments.


