[GitHub] [hadoop] lokeshj1703 closed pull request #1440: HDDS-2114: Rename does not preserve non-explicitly created interim directories

2019-09-16 Thread GitBox
lokeshj1703 closed pull request #1440: HDDS-2114: Rename does not preserve 
non-explicitly created interim directories
URL: https://github.com/apache/hadoop/pull/1440
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] lokeshj1703 commented on issue #1440: HDDS-2114: Rename does not preserve non-explicitly created interim directories

2019-09-16 Thread GitBox
lokeshj1703 commented on issue #1440: HDDS-2114: Rename does not preserve 
non-explicitly created interim directories
URL: https://github.com/apache/hadoop/pull/1440#issuecomment-532086334
 
 
   @anuengineer Yes, it's the same problem as you described. After a rename, if 
the source's parent has no remaining children, S3A does a mkdir for the parent. 
This PR does the same for ozonefs.
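
   To make the mechanism concrete, here is a minimal sketch of a rename that 
recreates a vanished implicit parent, written over a flat key namespace the way 
object stores model directories. All class and method names here are 
illustrative; this is not the actual S3A or OzoneFS code.

```java
import java.util.NavigableSet;
import java.util.TreeSet;

/**
 * Sketch of the fix discussed above: after renaming the last child out of a
 * directory, recreate the (implicit) parent so it does not vanish from
 * listings. Illustrative only -- not the real S3A/OzoneFS implementation.
 */
public class RenamePreservesParent {
  // Flat key namespace, as in an object store; a trailing '/' marks a
  // directory placeholder key.
  private final NavigableSet<String> keys = new TreeSet<>();

  public void put(String key) { keys.add(key); }

  public boolean exists(String key) { return keys.contains(key); }

  private boolean hasChildren(String dir) {
    String prefix = dir.endsWith("/") ? dir : dir + "/";
    // First key at or after the prefix; a real child must extend it.
    String next = keys.ceiling(prefix);
    return next != null && next.startsWith(prefix) && !next.equals(prefix);
  }

  private static String parentOf(String key) {
    int idx = key.lastIndexOf('/', key.length() - 2);
    return idx < 0 ? "" : key.substring(0, idx + 1);
  }

  /** Rename a single key; afterwards recreate the source parent if it
   *  would otherwise be left with no remaining children. */
  public void rename(String src, String dst) {
    keys.remove(src);
    keys.add(dst);
    String parent = parentOf(src);
    if (!parent.isEmpty() && !hasChildren(parent)) {
      keys.add(parent); // mkdir-style placeholder, as S3A does
    }
  }
}
```

Without the placeholder write at the end of `rename`, moving the only key out 
of `a/b/` would make `a/b/` disappear entirely from listings.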





[GitHub] [hadoop] hadoop-yetus commented on issue #1457: HDDS-2132. TestKeyValueContainer is failing

2019-09-16 Thread GitBox
hadoop-yetus commented on issue #1457: HDDS-2132. TestKeyValueContainer is 
failing
URL: https://github.com/apache/hadoop/pull/1457#issuecomment-532085967
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|------:|:--------|:--------|
   | 0 | reexec | 76 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 31 | hadoop-ozone in trunk failed. |
   | -1 | compile | 20 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 61 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 901 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 152 | trunk passed |
   | 0 | spotbugs | 173 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 23 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 30 | hadoop-ozone in the patch failed. |
   | -1 | compile | 21 | hadoop-ozone in the patch failed. |
   | -1 | javac | 21 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 52 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 789 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 149 | the patch passed |
   | -1 | findbugs | 23 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | +1 | unit | 251 | hadoop-hdds in the patch passed. |
   | -1 | unit | 25 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 3366 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1457/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1457 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 82e3dcde2e5d 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4f67081 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1457/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1457/1/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1457/1/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1457/1/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1457/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1457/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1457/1/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1457/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1457/1/testReport/ |
   | Max. process+thread count | 404 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service U: 
hadoop-hdds/container-service |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1457/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] dineshchitlangia commented on issue #1458: HDDS-2016. Add option to enforce gdpr in Bucket Create command.

2019-09-16 Thread GitBox
dineshchitlangia commented on issue #1458: HDDS-2016. Add option to enforce 
gdpr in Bucket Create command.
URL: https://github.com/apache/hadoop/pull/1458#issuecomment-532072577
 
 
   /label ozone
   





[GitHub] [hadoop] adoroszlai commented on issue #1451: HDDS-2134. OM Metrics graphs include empty request type

2019-09-16 Thread GitBox
adoroszlai commented on issue #1451: HDDS-2134. OM Metrics graphs include empty 
request type
URL: https://github.com/apache/hadoop/pull/1451#issuecomment-532072032
 
 
   /retest





[GitHub] [hadoop] adoroszlai commented on issue #1457: HDDS-2132. TestKeyValueContainer is failing

2019-09-16 Thread GitBox
adoroszlai commented on issue #1457: HDDS-2132. TestKeyValueContainer is failing
URL: https://github.com/apache/hadoop/pull/1457#issuecomment-532071738
 
 
   @bshashikant please review





[GitHub] [hadoop] dineshchitlangia opened a new pull request #1458: HDDS-2016. Add option to enforce gdpr in Bucket Create command.

2019-09-16 Thread GitBox
dineshchitlangia opened a new pull request #1458: HDDS-2016. Add option to 
enforce gdpr in Bucket Create command.
URL: https://github.com/apache/hadoop/pull/1458
 
 
   Verified by running in a local cluster, as the previous patches for this 
feature cover the tests.
   





[GitHub] [hadoop] bharatviswa504 merged pull request #1411: HDDS-2098 : Ozone shell command prints out ERROR when the log4j file …

2019-09-16 Thread GitBox
bharatviswa504 merged pull request #1411: HDDS-2098 : Ozone shell command 
prints out ERROR when the log4j file …
URL: https://github.com/apache/hadoop/pull/1411
 
 
   





[jira] [Commented] (HADOOP-13363) Upgrade protobuf from 2.5.0 to something newer

2019-09-16 Thread Vinayakumar B (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16931107#comment-16931107
 ] 

Vinayakumar B commented on HADOOP-13363:


Hi All, 

Please review the PR at [https://github.com/apache/hadoop/pull/1432] for 
HADOOP-16557, which upgrades protobuf to 3.7.1. It only upgrades the jar and 
fixes compilation and tests.

In subsequent subtasks/PRs I will replace 'hadoop-maven-plugin' with 
'protobuf-maven-plugin' to resolve protoc dynamically, project-by-project.

For shading and relocation of protobuf usage, however, changes across the 
entire project need to be made in a single PR because of the inter-dependency 
on generated code in the hadoop-common module. This will be taken up last.
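
For reference, resolving protoc dynamically with protobuf-maven-plugin 
typically looks roughly like the fragment below. The coordinates shown are the 
common community plugin (org.xolstice) plus os-maven-plugin for platform 
detection, given purely as an illustration rather than the actual Hadoop 
change under review:

```xml
<build>
  <extensions>
    <!-- Sets os.detected.classifier so the right protoc binary is fetched -->
    <extension>
      <groupId>kr.motd.maven</groupId>
      <artifactId>os-maven-plugin</artifactId>
      <version>1.6.2</version>
    </extension>
  </extensions>
  <plugins>
    <plugin>
      <groupId>org.xolstice.maven.plugins</groupId>
      <artifactId>protobuf-maven-plugin</artifactId>
      <version>0.6.1</version>
      <configuration>
        <!-- protoc is downloaded per-platform; no local install required -->
        <protocArtifact>com.google.protobuf:protoc:3.7.1:exe:${os.detected.classifier}</protocArtifact>
      </configuration>
      <executions>
        <execution>
          <goals><goal>compile</goal></goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```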

 

So for now, please review [https://github.com/apache/hadoop/pull/1432]. Jenkins 
result is available.

> Upgrade protobuf from 2.5.0 to something newer
> --
>
> Key: HADOOP-13363
> URL: https://issues.apache.org/jira/browse/HADOOP-13363
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Vinayakumar B
>Priority: Major
>  Labels: security
> Attachments: HADOOP-13363.001.patch, HADOOP-13363.002.patch, 
> HADOOP-13363.003.patch, HADOOP-13363.004.patch, HADOOP-13363.005.patch
>
>
> Standard protobuf 2.5.0 does not work properly on many platforms.  (See, for 
> example, https://gist.github.com/BennettSmith/7111094 ).  In order for us to 
> avoid crazy work arounds in the build environment and the fact that 2.5.0 is 
> starting to slowly disappear as a standard install-able package for even 
> Linux/x86, we need to either upgrade or self bundle or something else.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] avijayanhwx edited a comment on issue #1411: HDDS-2098 : Ozone shell command prints out ERROR when the log4j file …

2019-09-16 Thread GitBox
avijayanhwx edited a comment on issue #1411: HDDS-2098 : Ozone shell command 
prints out ERROR when the log4j file …
URL: https://github.com/apache/hadoop/pull/1411#issuecomment-529639718
 
 
   > I have a question.
   > During ozone tarball build, we copy ozone-shell-log4j.properties to 
etc/hadoop (like we copy log4j.properties), so why do we see this error, or 
does something need to be fixed in copying this script?
   > 
   > 
https://github.com/apache/hadoop/blob/trunk/hadoop-ozone/dist/dev-support/bin/dist-layout-stitching#L95
   
   Yes, when starting Ozone from the snapshot tarball, it works perfectly. 
However, when Ozone is deployed through a cluster management tool like Cloudera 
Manager, the log4j properties may not be individually configurable; we may have 
to rely on a default log4j.properties. In that case, printing a 
FileNotFoundException for ozone shell commands is something we can avoid.
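
   A sketch of the fallback being described: the file names are the real ones 
from this thread, but the helper and its structure are illustrative 
assumptions, not the actual patch.

```java
import java.nio.file.Files;
import java.nio.file.Path;

/**
 * Illustrative sketch: prefer the shell-specific log4j file when present,
 * otherwise quietly fall back to the default configuration instead of
 * surfacing a FileNotFoundException to the shell user.
 */
public class Log4jFallback {
  static String chooseConfig(Path confDir) {
    Path shellConf = confDir.resolve("ozone-shell-log4j.properties");
    Path defaultConf = confDir.resolve("log4j.properties");
    if (Files.exists(shellConf)) {
      return shellConf.toString();
    }
    // A missing shell config is expected under some deployment tools;
    // fall back without printing an ERROR.
    return defaultConf.toString();
  }
}
```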





[GitHub] [hadoop] adoroszlai commented on issue #1411: HDDS-2098 : Ozone shell command prints out ERROR when the log4j file …

2019-09-16 Thread GitBox
adoroszlai commented on issue #1411: HDDS-2098 : Ozone shell command prints out 
ERROR when the log4j file …
URL: https://github.com/apache/hadoop/pull/1411#issuecomment-532061447
 
 
   Thanks for fixing this @avijayanhwx.





[GitHub] [hadoop] avijayanhwx opened a new pull request #1411: HDDS-2098 : Ozone shell command prints out ERROR when the log4j file …

2019-09-16 Thread GitBox
avijayanhwx opened a new pull request #1411: HDDS-2098 : Ozone shell command 
prints out ERROR when the log4j file …
URL: https://github.com/apache/hadoop/pull/1411
 
 
   …is not present.
   
   
   Manually tested change on cluster.





[GitHub] [hadoop] avijayanhwx removed a comment on issue #1411: HDDS-2098 : Ozone shell command prints out ERROR when the log4j file …

2019-09-16 Thread GitBox
avijayanhwx removed a comment on issue #1411: HDDS-2098 : Ozone shell command 
prints out ERROR when the log4j file …
URL: https://github.com/apache/hadoop/pull/1411#issuecomment-531937463
 
 
   Closing this out since we can handle it from the cluster management tool. 





[GitHub] [hadoop] adoroszlai opened a new pull request #1457: HDDS-2132. TestKeyValueContainer is failing

2019-09-16 Thread GitBox
adoroszlai opened a new pull request #1457: HDDS-2132. TestKeyValueContainer is 
failing
URL: https://github.com/apache/hadoop/pull/1457
 
 
   ## What changes were proposed in this pull request?
   
   Fix unit tests recently broken by new `Preconditions.checkNotNull` in 
`KeyValueContainerUtil#parseKVContainerData` (added in 
fe8cdf0ab846df9c2f3f59d1d4875185633a27ea for 
[HDDS-2076](https://issues.apache.org/jira/browse/HDDS-2076)).
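
   As an illustrative sketch (not the actual Ozone code) of why a newly added 
null-check breaks existing callers, using `Objects.requireNonNull`, which 
behaves like Guava's `Preconditions.checkNotNull`:

```java
import java.util.Objects;

/**
 * Illustrative only: a checkNotNull added inside a parsing helper turns a
 * previously tolerated null argument into an immediate exception, which is
 * how tests that passed null started failing.
 */
public class ParseGuard {
  static String parse(String containerFile) {
    // New precondition: callers must now supply the file.
    Objects.requireNonNull(containerFile, "container file is required");
    return containerFile.trim();
  }
}
```

The fix in such cases is to make the affected tests supply the value the new 
precondition demands.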
   
   https://issues.apache.org/jira/browse/HDDS-2132
   https://issues.apache.org/jira/browse/HDDS-2133
   
   ## How was this patch tested?
   
   ```
   $ mvn -am -Phdds -pl :hadoop-hdds-container-service clean test
   ...
   [INFO] Apache Hadoop HDDS . SUCCESS [  2.527 
s]
   [INFO] Apache Hadoop HDDS Config .. SUCCESS [  3.223 
s]
   [INFO] Apache Hadoop HDDS Common .. SUCCESS [01:53 
min]
   [INFO] Apache Hadoop HDDS Server Framework  SUCCESS [ 18.258 
s]
   [INFO] Apache Hadoop HDDS Container Service ... SUCCESS [01:13 
min]
   ```





[GitHub] [hadoop] adoroszlai commented on issue #1457: HDDS-2132. TestKeyValueContainer is failing

2019-09-16 Thread GitBox
adoroszlai commented on issue #1457: HDDS-2132. TestKeyValueContainer is failing
URL: https://github.com/apache/hadoop/pull/1457#issuecomment-532057233
 
 
   /label ozone





[GitHub] [hadoop] hadoop-yetus commented on issue #1369: HDDS-2020. Remove mTLS from Ozone GRPC. Contributed by Xiaoyu Yao.

2019-09-16 Thread GitBox
hadoop-yetus commented on issue #1369: HDDS-2020. Remove mTLS from Ozone GRPC. 
Contributed by Xiaoyu Yao.
URL: https://github.com/apache/hadoop/pull/1369#issuecomment-532044005
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|------:|:--------|:--------|
   | 0 | reexec | 2458 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 2 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 12 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 73 | Maven dependency ordering for branch |
   | -1 | mvninstall | 30 | hadoop-ozone in trunk failed. |
   | -1 | compile | 23 | hadoop-ozone in trunk failed. |
   | -0 | checkstyle | 59 | The patch fails to run checkstyle in hadoop-ozone |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 887 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 84 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 164 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 27 | hadoop-ozone in trunk failed. |
   | -0 | patch | 197 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 29 | Maven dependency ordering for patch |
   | -1 | mvninstall | 33 | hadoop-ozone in the patch failed. |
   | -1 | compile | 26 | hadoop-ozone in the patch failed. |
   | -1 | cc | 26 | hadoop-ozone in the patch failed. |
   | -1 | javac | 26 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 50 | hadoop-hdds: The patch generated 62 new + 919 
unchanged - 44 fixed = 981 total (was 963) |
   | -0 | checkstyle | 51 | The patch fails to run checkstyle in hadoop-ozone |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 702 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 84 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 169 | hadoop-hdds generated 1 new + 0 unchanged - 0 fixed 
= 1 total (was 0) |
   | -1 | findbugs | 27 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 166 | hadoop-hdds in the patch failed. |
   | -1 | unit | 29 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 35 | The patch does not generate ASF License warnings. |
   | | | 5747 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-hdds |
   |  |  Boxing/unboxing to parse a primitive 
org.apache.hadoop.hdds.security.x509.certificate.client.DefaultCertificateClient.loadAllCertificates()
  At 
DefaultCertificateClient.java:org.apache.hadoop.hdds.security.x509.certificate.client.DefaultCertificateClient.loadAllCertificates()
  At DefaultCertificateClient.java:[line 141] |
   | Failed junit tests | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   |   | hadoop.ozone.container.keyvalue.TestKeyValueContainer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1369 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml cc |
   | uname | Linux a3af19ce03e3 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 2358e53 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/8/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/8/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/8/artifact/out//home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1369/out/maven-branch-checkstyle-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/8/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/8/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/8/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/8/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | cc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/8/arti

[GitHub] [hadoop] hadoop-yetus commented on issue #1456: HDDS-2139. Update BeanUtils and Jackson Databind dependency versions.

2019-09-16 Thread GitBox
hadoop-yetus commented on issue #1456: HDDS-2139. Update BeanUtils and Jackson 
Databind dependency versions.
URL: https://github.com/apache/hadoop/pull/1456#issuecomment-532024260
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|------:|:--------|:--------|
   | 0 | reexec | 41 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   ||| _ trunk Compile Tests _ |
   | +1 | shadedclient | 760 | branch has no errors when building and testing 
our client artifacts. |
   ||| _ Patch Compile Tests _ |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 750 | patch has no errors when building and testing 
our client artifacts. |
   ||| _ Other Tests _ |
   | +1 | asflicense | 35 | The patch does not generate ASF License warnings. |
   | | | 1696 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1456/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1456 |
   | Optional Tests | dupname asflicense xml |
   | uname | Linux 72c3ca204568 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 2358e53 |
   | Max. process+thread count | 411 (vs. ulimit of 5500) |
   | modules | C: . U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1456/1/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-15958) Revisiting LICENSE and NOTICE files

2019-09-16 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16931003#comment-16931003
 ] 

Akira Ajisaka commented on HADOOP-15958:


Hi [~tangzhankun], I don't know of any tools; I collected the information from 
the existing LICENSE file.
Sorry for the late response.

> Revisiting LICENSE and NOTICE files
> ---
>
> Key: HADOOP-15958
> URL: https://issues.apache.org/jira/browse/HADOOP-15958
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Blocker
> Fix For: 3.3.0
>
> Attachments: HADOOP-15958-002.patch, HADOOP-15958-003.patch, 
> HADOOP-15958-004.patch, HADOOP-15958-wip.001.patch, HADOOP-15958.005.patch, 
> HADOOP-15958.006.patch, HADOOP-15958.007.patch
>
>
> Originally reported from [~jmclean]:
> * NOTICE file incorrectly lists copyrights that shouldn't be there and 
> mentions licenses such as MIT, BSD, and public domain that should be 
> mentioned in LICENSE only.
> * It's better to have a separate LICENSE and NOTICE for the source and binary 
> releases.
> http://www.apache.org/dev/licensing-howto.html






[GitHub] [hadoop] hanishakoneru opened a new pull request #1456: HDDS-2139. Update BeanUtils and Jackson Databind dependency versions.

2019-09-16 Thread GitBox
hanishakoneru opened a new pull request #1456: HDDS-2139. Update BeanUtils and 
Jackson Databind dependency versions.
URL: https://github.com/apache/hadoop/pull/1456
 
 
   The following Ozone dependencies have known security vulnerabilities. We 
should update them to newer/latest versions.
   - Apache Commons BeanUtils version 1.9.3
   - FasterXML Jackson version 2.9.5





[GitHub] [hadoop] hadoop-yetus commented on issue #1446: YARN-9834. Allow using a pool of local users to run Yarn Secure Conta…

2019-09-16 Thread GitBox
hadoop-yetus commented on issue #1446: YARN-9834. Allow using a pool of local 
users to run Yarn Secure Conta…
URL: https://github.com/apache/hadoop/pull/1446#issuecomment-532013551
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|------:|:--------|:--------|
   | 0 | reexec | 39 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 42 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1033 | trunk passed |
   | +1 | compile | 496 | trunk passed |
   | +1 | checkstyle | 90 | trunk passed |
   | +1 | mvnsite | 103 | trunk passed |
   | +1 | shadedclient | 903 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 85 | trunk passed |
   | 0 | spotbugs | 85 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 188 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 41 | Maven dependency ordering for patch |
   | +1 | mvninstall | 74 | the patch passed |
   | +1 | compile | 444 | the patch passed |
   | +1 | javac | 444 | the patch passed |
   | -0 | checkstyle | 88 | hadoop-yarn-project/hadoop-yarn: The patch 
generated 30 new + 382 unchanged - 1 fixed = 412 total (was 383) |
   | +1 | mvnsite | 97 | the patch passed |
   | -1 | whitespace | 0 | The patch has 2 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | shadedclient | 751 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 80 | the patch passed |
   | -1 | findbugs | 91 | 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
   ||| _ Other Tests _ |
   | -1 | unit | 60 | hadoop-yarn-api in the patch failed. |
   | -1 | unit | 1317 | hadoop-yarn-server-nodemanager in the patch failed. |
   | -1 | asflicense | 49 | The patch generated 1 ASF License warnings. |
   | | | 6245 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
   |  |  Possible doublecheck on 
org.apache.hadoop.yarn.server.nodemanager.SecureModeLocalUserAllocator.instance 
in 
org.apache.hadoop.yarn.server.nodemanager.SecureModeLocalUserAllocator.getInstance(Configuration)
  At 
SecureModeLocalUserAllocator.java:org.apache.hadoop.yarn.server.nodemanager.SecureModeLocalUserAllocator.getInstance(Configuration)
  At SecureModeLocalUserAllocator.java:[lines 85-87] |
   | Failed junit tests | hadoop.yarn.conf.TestYarnConfigurationFields |
   |   | hadoop.yarn.server.nodemanager.TestNodeManagerReboot |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1446/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1446 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 027e04a0ef2c 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 2358e53 |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1446/4/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1446/4/artifact/out/whitespace-eol.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1446/4/artifact/out/new-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.html
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1446/4/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1446/4/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1446/4/testReport/ |
   | asflicense | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1446/4/artifact/out/patch-asflicense-problems.txt
 |
   | Max. process+thread count | 412 (vs. ulimit of 5500) |
   | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: hadoop-yarn-project/hadoop-yarn |
   | Console output | 
https://builds.a

[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #1446: YARN-9834. Allow using a pool of local users to run Yarn Secure Conta…

2019-09-16 Thread GitBox
hadoop-yetus commented on a change in pull request #1446: YARN-9834. Allow 
using a pool of local users to run Yarn Secure Conta…
URL: https://github.com/apache/hadoop/pull/1446#discussion_r324943290
 
 

 ##
 File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/SecureModeLocalUserAllocator.java
 ##
 @@ -0,0 +1,250 @@
+/**
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+
+package org.apache.hadoop.yarn.server.nodemanager;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.Map;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.yarn.conf.YarnConfiguration;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+class LocalUserInfo {
+  String localUser;
+  int localUserIndex;
+  int appCount;
+  int fileOpCount;
+  int logHandlingCount;
+  
+  public LocalUserInfo(String user, int userIndex) {
+    localUser = user;
+    localUserIndex = userIndex;
+    appCount = 0;
+    fileOpCount = 0;
+    logHandlingCount = 0;
+  }
+}
+
+/**
+ * Allocates a local user to an appUser from a pool of precreated local users.
+ * Maintains the appUser to local user mapping until:
+ * a) all applications of the appUser are finished;
+ * b) all FileDeletionTasks for that appUser are executed;
+ * c) all log aggregation/handling requests for the appUser's applications are done.
+ * For now the allocation is only maintained in memory, so it does not support
+ * node manager recovery mode.
+ */
+public class SecureModeLocalUserAllocator {
+  public static final String NONEXISTUSER = "nonexistuser";
+  private static final Logger LOG =
+  LoggerFactory.getLogger(SecureModeLocalUserAllocator.class);
+  private static SecureModeLocalUserAllocator instance;
+  private Map<String, LocalUserInfo> appUserToLocalUser;
+  private ArrayList<LocalUserInfo> allocated;
+  private int localUserCount;
+  private String localUserPrefix;
+
+  SecureModeLocalUserAllocator(Configuration conf) {
+    if (conf.getBoolean(YarnConfiguration.NM_RECOVERY_ENABLED,
+        YarnConfiguration.DEFAULT_NM_RECOVERY_ENABLED)) {
+      String errMsg = "Invalid configuration combination: " +
+          YarnConfiguration.NM_RECOVERY_ENABLED + "=true, " +
+          YarnConfiguration.NM_SECURE_MODE_USE_POOL_USER + "=true";
+      throw new RuntimeException(errMsg);
+    }
+    localUserPrefix = conf.get(
+        YarnConfiguration.NM_SECURE_MODE_POOL_USER_PREFIX,
+        YarnConfiguration.DEFAULT_NM_SECURE_MODE_POOL_USER_PREFIX);
+    localUserCount = conf.getInt(YarnConfiguration.NM_VCORES,
+        YarnConfiguration.DEFAULT_NM_VCORES);
+    allocated = new ArrayList<>(localUserCount);
+    appUserToLocalUser = new HashMap<>(localUserCount);
+    for (int i = 0; i
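The pool-allocation scheme described in the javadoc above (hand out a precreated local user per appUser, refcount its uses across apps, file operations, and log handling, and return it to the pool once the count drops to zero) can be illustrated with a toy model. All class and method names below are illustrative sketches, not the actual YARN-9834 code:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

/** Toy sketch of pool-user allocation; illustrative names only. */
public class PoolUserSketch {
  private final Deque<String> free = new ArrayDeque<>();
  private final Map<String, String> appUserToLocalUser = new HashMap<>();
  private final Map<String, Integer> refCount = new HashMap<>();

  public PoolUserSketch(String prefix, int poolSize) {
    for (int i = 0; i < poolSize; i++) {
      free.addLast(prefix + i);   // precreated local users: user0, user1, ...
    }
  }

  /** Allocate (or reuse) a pool user for appUser and bump its refcount. */
  public synchronized String allocate(String appUser) {
    String local = appUserToLocalUser.get(appUser);
    if (local == null) {
      if (free.isEmpty()) {
        throw new IllegalStateException("pool exhausted");
      }
      local = free.removeFirst();
      appUserToLocalUser.put(appUser, local);
      refCount.put(appUser, 0);
    }
    refCount.merge(appUser, 1, Integer::sum);
    return local;
  }

  /** Drop one reference; free the pool user once nothing holds it. */
  public synchronized void release(String appUser) {
    int left = refCount.merge(appUser, -1, Integer::sum);
    if (left <= 0) {
      free.addLast(appUserToLocalUser.remove(appUser));
      refCount.remove(appUser);
    }
  }

  public synchronized String lookup(String appUser) {
    return appUserToLocalUser.get(appUser);
  }
}
```

Because the mapping lives only in these in-memory maps, a restart loses it, which is why the patch rejects enabling this together with node manager recovery.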

[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #1446: YARN-9834. Allow using a pool of local users to run Yarn Secure Conta…

2019-09-16 Thread GitBox
hadoop-yetus commented on a change in pull request #1446: YARN-9834. Allow 
using a pool of local users to run Yarn Secure Conta…
URL: https://github.com/apache/hadoop/pull/1446#discussion_r324943291
 
 

 ##
 File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/SecureModeLocalUserAllocator.java
 ##
 @@ -0,0 +1,250 @@
+/**
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+
+package org.apache.hadoop.yarn.server.nodemanager;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.Map;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.yarn.conf.YarnConfiguration;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+class LocalUserInfo {
+  String localUser;
+  int localUserIndex;
+  int appCount;
+  int fileOpCount;
+  int logHandlingCount;
+  
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16371) Option to disable GCM for SSL connections when running on Java 8

2019-09-16 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16930968#comment-16930968
 ] 

Hadoop QA commented on HADOOP-16371:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
40s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
25s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 21m 
29s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
29s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
11s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
30s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m 
50s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m  1s{color} | {color:orange} root: The patch generated 1 new + 16 unchanged - 
0 fixed = 17 total (was 16) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 21s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
33s{color} | {color:red} hadoop-tools_hadoop-aws generated 4 new + 1 unchanged 
- 0 fixed = 5 total (was 1) {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m  
5s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
27s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
32s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
48s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}146m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-970/1

[GitHub] [hadoop] hadoop-yetus commented on issue #970: HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8

2019-09-16 Thread GitBox
hadoop-yetus commented on issue #970: HADOOP-16371: Option to disable GCM for 
SSL connections when running on Java 8
URL: https://github.com/apache/hadoop/pull/970#issuecomment-532001946
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 147 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 100 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1532 | trunk passed |
   | +1 | compile | 1361 | trunk passed |
   | +1 | checkstyle | 167 | trunk passed |
   | +1 | mvnsite | 205 | trunk passed |
   | -1 | shadedclient | 1289 | branch has errors when building and testing our 
client artifacts. |
   | +1 | javadoc | 149 | trunk passed |
   | 0 | spotbugs | 71 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 270 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for patch |
   | +1 | mvninstall | 129 | the patch passed |
   | +1 | compile | 1130 | the patch passed |
   | +1 | javac | 1130 | the patch passed |
   | -0 | checkstyle | 181 | root: The patch generated 1 new + 16 unchanged - 0 
fixed = 17 total (was 16) |
   | +1 | mvnsite | 169 | the patch passed |
   | +1 | whitespace | 1 | The patch has no whitespace issues. |
   | +1 | xml | 4 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 801 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 33 | hadoop-tools_hadoop-aws generated 4 new + 1 unchanged 
- 0 fixed = 5 total (was 1) |
   | +1 | findbugs | 266 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 545 | hadoop-common in the patch passed. |
   | +1 | unit | 87 | hadoop-aws in the patch passed. |
   | +1 | unit | 92 | hadoop-azure in the patch passed. |
   | +1 | asflicense | 48 | The patch does not generate ASF License warnings. |
   | | | 8761 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-970/14/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/970 |
   | JIRA Issue | HADOOP-16371 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle |
   | uname | Linux 3eeeb1eb687e 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 66bd168 |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-970/14/artifact/out/diff-checkstyle-root.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-970/14/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-970/14/testReport/ |
   | Max. process+thread count | 1341 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
hadoop-tools/hadoop-azure U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-970/14/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] anuengineer closed pull request #1447: HDDS-2111. XSS fragments can be injected to the S3g landing page

2019-09-16 Thread GitBox
anuengineer closed pull request #1447: HDDS-2111. XSS fragments can be injected 
to the S3g landing page  
URL: https://github.com/apache/hadoop/pull/1447
 
 
   





[GitHub] [hadoop] hadoop-yetus commented on issue #1399: HADOOP-16543: Cached DNS name resolution error

2019-09-16 Thread GitBox
hadoop-yetus commented on issue #1399: HADOOP-16543: Cached DNS name resolution 
error
URL: https://github.com/apache/hadoop/pull/1399#issuecomment-531984923
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 83 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 15 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1196 | trunk passed |
   | +1 | compile | 531 | trunk passed |
   | +1 | checkstyle | 80 | trunk passed |
   | +1 | mvnsite | 133 | trunk passed |
   | +1 | shadedclient | 982 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 118 | trunk passed |
   | 0 | spotbugs | 55 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 267 | trunk passed |
   | -0 | patch | 88 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 16 | Maven dependency ordering for patch |
   | +1 | mvninstall | 97 | the patch passed |
   | +1 | compile | 485 | the patch passed |
   | +1 | javac | 485 | the patch passed |
   | -0 | checkstyle | 83 | hadoop-yarn-project/hadoop-yarn: The patch 
generated 263 new + 214 unchanged - 0 fixed = 477 total (was 214) |
   | +1 | mvnsite | 122 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 808 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 113 | the patch passed |
   | +1 | findbugs | 288 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 52 | hadoop-yarn-api in the patch passed. |
   | +1 | unit | 232 | hadoop-yarn-common in the patch passed. |
   | +1 | unit | 1584 | hadoop-yarn-client in the patch passed. |
   | +1 | asflicense | 44 | The patch does not generate ASF License warnings. |
   | | | 7338 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1399/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1399 |
   | JIRA Issue | HADOOP-16543 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 48764093e1a4 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 66bd168 |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1399/1/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1399/1/testReport/ |
   | Max. process+thread count | 565 (vs. ulimit of 5500) |
   | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client U: 
hadoop-yarn-project/hadoop-yarn |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1399/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-16543) Cached DNS name resolution error

2019-09-16 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16930922#comment-16930922
 ] 

Hadoop QA commented on HADOOP-16543:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 22s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  0m 
55s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
27s{color} | {color:green} trunk passed {color} |
| {color:orange}-0{color} | {color:orange} patch {color} | {color:orange}  1m 
28s{color} | {color:orange} Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m  
5s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 23s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 263 new + 214 unchanged - 0 fixed = 477 total (was 214) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
52s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
52s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 26m 
24s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{col

[GitHub] [hadoop] anuengineer edited a comment on issue #1440: HDDS-2114: Rename does not preserve non-explicitly created interim directories

2019-09-16 Thread GitBox
anuengineer edited a comment on issue #1440: HDDS-2114: Rename does not 
preserve non-explicitly created interim directories
URL: https://github.com/apache/hadoop/pull/1440#issuecomment-531984790
 
 
   I am going to +1 this, since we want to make sure Hive works.
   
   I just want to understand this more clearly. The issue is really that if we 
were a real file system, then there would be nothing called an implicit path. Since 
we are an object store, there is a notion of an implicitly created file path (in 
this case the intermediary directories). I am guessing that S3AFS has the same 
problem, and either Hive has a workaround for this, or S3A is doing something 
clever. Do we know how Hive works on S3?
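For context, the "implicit directory" behavior under discussion can be modeled in a few lines: in an object store a directory exists only while some key lives under its prefix, so a rename that moves the last child out must recreate a marker object for the source's parent. This is a deliberately simplified in-memory sketch, not the actual S3A or ozonefs code:

```java
import java.util.TreeMap;

/** Minimal model of an object store where directories are implied by key prefixes. */
public class ImplicitDirModel {
  private final TreeMap<String, String> keys = new TreeMap<>();

  public void put(String key) { keys.put(key, ""); }

  public boolean exists(String path) {
    if (keys.containsKey(path) || keys.containsKey(path + "/")) {
      return true;  // plain object or explicit directory marker
    }
    // An "implicit" directory exists while some key lives under it.
    String next = keys.ceilingKey(path + "/");
    return next != null && next.startsWith(path + "/");
  }

  /** Rename a single object, then keep the source's parent alive. */
  public void rename(String src, String dst) {
    keys.remove(src);
    keys.put(dst, "");
    String parent = src.substring(0, Math.max(src.lastIndexOf('/'), 0));
    if (!parent.isEmpty() && !exists(parent)) {
      keys.put(parent + "/", "");  // recreate marker, akin to a mkdir of the parent
    }
  }
}
```

Without the marker step in rename, the source's parent directory would silently vanish once its last child moved away, which is exactly the failure mode the PR addresses.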





[GitHub] [hadoop] anuengineer commented on issue #1440: HDDS-2114: Rename does not preserve non-explicitly created interim directories

2019-09-16 Thread GitBox
anuengineer commented on issue #1440: HDDS-2114: Rename does not preserve 
non-explicitly created interim directories
URL: https://github.com/apache/hadoop/pull/1440#issuecomment-531984790
 
 
   I am going to +1 this, since we want to make sure Hive works.
   
   I just want to understand this more clearly. The issue is really that if we 
were a real file system, then there would be nothing called an implicit path. Since 
we are an object store, there is a notion of an implicitly created file system. 
I am guessing that S3AFS has the same problem, and either Hive has a workaround 
for this, or S3A is doing something really clever. Do we know how Hive works on 
S3?





[GitHub] [hadoop] anuengineer commented on issue #1448: HDDS-2110. Arbitrary file can be downloaded with the help of ProfilerServlet

2019-09-16 Thread GitBox
anuengineer commented on issue #1448: HDDS-2110. Arbitrary file can be 
downloaded with the help of ProfilerServlet
URL: https://github.com/apache/hadoop/pull/1448#issuecomment-531980439
 
 
   Do you want to write a FindBugs suppression rule with a pointer to 
HDDS-2110, so that people know why we are suppressing this FindBugs warning, and 
then actually suppress it?
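For reference, the underlying HDDS-2110 concern is path traversal out of the profiler output directory. Below is a minimal sketch of the kind of canonical-path check involved; the names are illustrative, not the actual ProfilerServlet code, and the suppression being discussed would typically be an `@SuppressFBWarnings` annotation (or an exclude-filter entry) whose justification points at HDDS-2110:

```java
import java.io.File;
import java.io.IOException;

/** Sketch: refuse requested files that resolve outside the base directory. */
public final class ProfilerPathGuard {
  private ProfilerPathGuard() { }

  public static boolean isSafe(File baseDir, String requested) {
    try {
      String base = baseDir.getCanonicalPath() + File.separator;
      // Canonicalization collapses ".." segments and resolves symlinks,
      // so "../etc/passwd"-style requests fall outside the base prefix.
      String target = new File(baseDir, requested).getCanonicalPath();
      return target.startsWith(base);
    } catch (IOException e) {
      return false;  // fail closed on any resolution error
    }
  }
}
```

With a check like this in place, the remaining FindBugs warning on the file construction is a false positive, which is what a documented suppression would record.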





[GitHub] [hadoop] sahilTakiar commented on issue #970: HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8

2019-09-16 Thread GitBox
sahilTakiar commented on issue #970: HADOOP-16371: Option to disable GCM for 
SSL connections when running on Java 8
URL: https://github.com/apache/hadoop/pull/970#issuecomment-531977379
 
 
   Addressed comments, re-ran tests, the only additional failure is 
`ITestS3AFileOperationCost`, which is failing on trunk for me as well.
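For context, HADOOP-16371 is about optionally dropping AES-GCM cipher suites on Java 8, where the JSSE GCM implementation is known to be slow. A generic sketch of that kind of filtering, independent of how the actual patch wires it into the S3A/ABFS clients:

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch: drop GCM cipher suites from an enabled-suites list. Illustrative only. */
public final class CipherSuiteFilter {
  private CipherSuiteFilter() { }

  public static String[] withoutGcm(String[] enabled) {
    List<String> kept = new ArrayList<>();
    for (String suite : enabled) {
      // Standard JSSE suite names embed the bulk cipher mode, e.g. "_GCM_".
      if (!suite.contains("_GCM_")) {
        kept.add(suite);
      }
    }
    return kept.toArray(new String[0]);
  }
}
```

The filtered array would then be passed to something like `SSLParameters.setCipherSuites` before opening connections, leaving CBC suites available on Java 8.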





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1452: HDDS-2121. Create a shaded ozone filesystem (client) jar

2019-09-16 Thread GitBox
bharatviswa504 commented on a change in pull request #1452: HDDS-2121. Create a 
shaded ozone filesystem (client) jar
URL: https://github.com/apache/hadoop/pull/1452#discussion_r324905642
 
 

 ##
 File path: hadoop-ozone/ozonefs-lib-current/pom.xml
 ##
 @@ -83,6 +63,78 @@
   true
 
   
+  
+org.apache.maven.plugins
+maven-shade-plugin
+
+  
+package
+
+  shade
+
+
+  
+
+  classworlds:classworlds
 
 Review comment:
   What is this classworlds:classworlds? Was it mistakenly added from a Maven 
example?





[GitHub] [hadoop] sahilTakiar commented on a change in pull request #970: HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8

2019-09-16 Thread GitBox
sahilTakiar commented on a change in pull request #970: HADOOP-16371: Option to 
disable GCM for SSL connections when running on Java 8
URL: https://github.com/apache/hadoop/pull/970#discussion_r324882691
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/NetworkBinding.java
 ##
 @@ -0,0 +1,114 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.impl;
+
+import java.io.IOException;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+
+import javax.net.ssl.HostnameVerifier;
+import javax.net.ssl.SSLSocketFactory;
+
+import com.amazonaws.ClientConfiguration;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory;
+
+import org.slf4j.Logger;
 
 Review comment:
   Done





[GitHub] [hadoop] sahilTakiar commented on a change in pull request #970: HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8

2019-09-16 Thread GitBox
sahilTakiar commented on a change in pull request #970: HADOOP-16371: Option to 
disable GCM for SSL connections when running on Java 8
URL: https://github.com/apache/hadoop/pull/970#discussion_r324882650
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractSeek.java
 ##
 @@ -114,6 +127,7 @@ protected Configuration createConfiguration() {
 S3ATestUtils.disableFilesystemCaching(conf);
 conf.setInt(READAHEAD_RANGE, READAHEAD);
 conf.set(INPUT_FADVISE, seekPolicy);
+conf.set(SSL_CHANNEL_MODE, sslChannelMode.name());
 
 Review comment:
   Done





[GitHub] [hadoop] sahilTakiar commented on a change in pull request #970: HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8

2019-09-16 Thread GitBox
sahilTakiar commented on a change in pull request #970: HADOOP-16371: Option to 
disable GCM for SSL connections when running on Java 8
URL: https://github.com/apache/hadoop/pull/970#discussion_r324882617
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractSeek.java
 ##
 @@ -75,18 +83,23 @@
   @Parameterized.Parameters
   public static Collection params() {
 return Arrays.asList(new Object[][]{
-{INPUT_FADV_RANDOM},
-{INPUT_FADV_NORMAL},
-{INPUT_FADV_SEQUENTIAL},
+{INPUT_FADV_RANDOM, Default_JSSE},
 
 Review comment:
   Done


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on issue #1453: HDDS-2135. OM Metric mismatch (MultipartUpload failures)

2019-09-16 Thread GitBox
bharatviswa504 commented on issue #1453: HDDS-2135. OM Metric mismatch 
(MultipartUpload failures)
URL: https://github.com/apache/hadoop/pull/1453#issuecomment-531939480
 
 
   /retest





[GitHub] [hadoop] avijayanhwx closed pull request #1411: HDDS-2098 : Ozone shell command prints out ERROR when the log4j file …

2019-09-16 Thread GitBox
avijayanhwx closed pull request #1411: HDDS-2098 : Ozone shell command prints 
out ERROR when the log4j file …
URL: https://github.com/apache/hadoop/pull/1411
 
 
   





[GitHub] [hadoop] avijayanhwx commented on issue #1411: HDDS-2098 : Ozone shell command prints out ERROR when the log4j file …

2019-09-16 Thread GitBox
avijayanhwx commented on issue #1411: HDDS-2098 : Ozone shell command prints 
out ERROR when the log4j file …
URL: https://github.com/apache/hadoop/pull/1411#issuecomment-531937463
 
 
   Closing this out since we can handle it from the cluster management tool. 





[GitHub] [hadoop] bharatviswa504 commented on issue #1277: HDDS-1054. List Multipart uploads in a bucket

2019-09-16 Thread GitBox
bharatviswa504 commented on issue #1277: HDDS-1054. List Multipart uploads in a 
bucket
URL: https://github.com/apache/hadoop/pull/1277#issuecomment-531935761
 
 
   /retest





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1277: HDDS-1054. List Multipart uploads in a bucket

2019-09-16 Thread GitBox
bharatviswa504 commented on a change in pull request #1277: HDDS-1054. List 
Multipart uploads in a bucket
URL: https://github.com/apache/hadoop/pull/1277#discussion_r324859826
 
 

 ##
 File path: 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/util/S3StorageType.java
 ##
 @@ -52,4 +52,12 @@ public static S3StorageType getDefault() {
 return STANDARD;
   }
 
+  public static S3StorageType fromReplicationType(
+  ReplicationType replicationType) {
+if (replicationType == ReplicationType.STAND_ALONE) {
+  return S3StorageType.REDUCED_REDUNDANCY;
 
 Review comment:
   Just a question: I think this predates this patch.
   Previously we used STAND_ALONE for replication factor one; now we use RATIS 
with factors one and three. So I think we need to change this code, right?





[GitHub] [hadoop] hanishakoneru merged pull request #1424: HDDS-2107. Datanodes should retry forever to connect to SCM in an…

2019-09-16 Thread GitBox
hanishakoneru merged pull request #1424: HDDS-2107. Datanodes should retry 
forever to connect to SCM in an…
URL: https://github.com/apache/hadoop/pull/1424
 
 
   





[GitHub] [hadoop] hanishakoneru commented on issue #1424: HDDS-2107. Datanodes should retry forever to connect to SCM in an…

2019-09-16 Thread GitBox
hanishakoneru commented on issue #1424: HDDS-2107. Datanodes should retry 
forever to connect to SCM in an…
URL: https://github.com/apache/hadoop/pull/1424#issuecomment-531933488
 
 
   Thank you @vivekratnavel. +1. I will commit it.





[GitHub] [hadoop] xiaoyuyao commented on issue #1194: HDDS-1879. Support multiple excluded scopes when choosing datanodes in NetworkTopology

2019-09-16 Thread GitBox
xiaoyuyao commented on issue #1194: HDDS-1879.  Support multiple excluded 
scopes when choosing datanodes in NetworkTopology
URL: https://github.com/apache/hadoop/pull/1194#issuecomment-531933511
 
 
   +1, I've merged the change to trunk. 





[GitHub] [hadoop] xiaoyuyao merged pull request #1194: HDDS-1879. Support multiple excluded scopes when choosing datanodes in NetworkTopology

2019-09-16 Thread GitBox
xiaoyuyao merged pull request #1194: HDDS-1879.  Support multiple excluded 
scopes when choosing datanodes in NetworkTopology
URL: https://github.com/apache/hadoop/pull/1194
 
 
   





[GitHub] [hadoop] sahilTakiar edited a comment on issue #970: HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8

2019-09-16 Thread GitBox
sahilTakiar edited a comment on issue #970: HADOOP-16371: Option to disable GCM 
for SSL connections when running on Java 8
URL: https://github.com/apache/hadoop/pull/970#issuecomment-531922044
 
 
   Thanks for the feedback and running all the tests Steve! I left a comment 
above about why I think everything will still work without wildfly on the 
classpath.
   
   Working on addressing the other comments.





[jira] [Created] (HADOOP-16578) ABFS: fileSystemExists() should not call container level apis

2019-09-16 Thread Da Zhou (Jira)
Da Zhou created HADOOP-16578:


 Summary: ABFS: fileSystemExists() should not call container level 
apis
 Key: HADOOP-16578
 URL: https://issues.apache.org/jira/browse/HADOOP-16578
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure
Affects Versions: 3.3.0
Reporter: Da Zhou
 Fix For: 3.3.0


The ABFS driver should not use the container-level API "Get Container Properties", as 
there is no concept of a container in HDFS, and this caused some RBAC check issues.
Fix: use getFileStatus() to check whether the container exists.
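The existence check described in the fix can be sketched as follows. This is a minimal, hedged illustration of the pattern (probe a path-level call and treat FileNotFoundException as "does not exist"); the `StatusLookup` interface and `ContainerExistsCheck` class are hypothetical stand-ins, not the real ABFS driver code, which would call `org.apache.hadoop.fs.FileSystem#getFileStatus` directly.

```java
import java.io.FileNotFoundException;
import java.io.IOException;

public class ContainerExistsCheck {

    // Hypothetical stand-in for FileSystem#getFileStatus; the real driver
    // would use the Hadoop FileSystem API directly.
    interface StatusLookup {
        Object getFileStatus(String path) throws IOException;
    }

    // Existence probe built on a path-level call instead of the
    // container-level "Get Container Properties" API.
    static boolean exists(StatusLookup fs, String path) throws IOException {
        try {
            fs.getFileStatus(path);
            return true;
        } catch (FileNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) throws IOException {
        // Fake lookup: only the root path (the "container") exists.
        StatusLookup fake = path -> {
            if ("/".equals(path)) {
                return new Object();
            }
            throw new FileNotFoundException(path);
        };
        System.out.println(exists(fake, "/"));        // true
        System.out.println(exists(fake, "/missing")); // false
    }
}
```

The point of the pattern is that a path-level probe needs only filesystem-level permissions, whereas a container-level properties call requires container-scoped RBAC rights.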



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1277: HDDS-1054. List Multipart uploads in a bucket

2019-09-16 Thread GitBox
bharatviswa504 commented on a change in pull request #1277: HDDS-1054. List 
Multipart uploads in a bucket
URL: https://github.com/apache/hadoop/pull/1277#discussion_r324853402
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmMultipartUploadList.java
 ##
 @@ -18,15 +18,21 @@
 
 package org.apache.hadoop.ozone.om.helpers;
 
-import java.util.ArrayList;
 import java.util.List;
 
+import org.apache.hadoop.hdds.client.ReplicationType;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationFactor;
+
 /**
- * List of in-flight MU upoads.
+ * List of in-flight MPU uploads.
  */
 public class OmMultipartUploadList {
 
-  private List uploads = new ArrayList<>();
+  private ReplicationType replicationType;
+
+  private ReplicationFactor replicationFactor;
 
 Review comment:
   Why do we need this in the multipart list result?
   These are specific to each in-progress MPU.





[jira] [Commented] (HADOOP-16577) Build fails as can't retrieve websocket-servlet

2019-09-16 Thread Jira


[ 
https://issues.apache.org/jira/browse/HADOOP-16577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16930812#comment-16930812
 ] 

Erkin Alp Güney commented on HADOOP-16577:
--

No, stuck at the same point.

> Build fails as can't retrieve websocket-servlet
> ---
>
> Key: HADOOP-16577
> URL: https://issues.apache.org/jira/browse/HADOOP-16577
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Erkin Alp Güney
>Priority: Blocker
>  Labels: dependencies
>
> I encountered this error when building Hadoop:
> Downloading: 
> https://repository.apache.org/content/repositories/snapshots/org/eclipse/jetty/websocket/websocket-server/9.3.27.v20190418/websocket-server-9.3.27.v20190418.jar
> Sep 15, 2019 7:54:39 AM 
> org.apache.maven.wagon.providers.http.httpclient.impl.execchain.RetryExec 
> execute
> INFO: I/O exception 
> (org.apache.maven.wagon.providers.http.httpclient.NoHttpResponseException) 
> caught when processing request to {s}->https://repository.apache.org:443: The 
> target server failed to respond
> Sep 15, 2019 7:54:39 AM 
> org.apache.maven.wagon.providers.http.httpclient.impl.execchain.RetryExec 
> execute






[GitHub] [hadoop] virajjasani commented on issue #1455: HDDS-2137 : OzoneUtils to verify resourceName using HddsClientUtils

2019-09-16 Thread GitBox
virajjasani commented on issue #1455: HDDS-2137 : OzoneUtils to verify 
resourceName using HddsClientUtils
URL: https://github.com/apache/hadoop/pull/1455#issuecomment-531923682
 
 
   @elek @adoroszlai Could you please let me know if there is a way to build 
hadoop-ozone with hadoop-hdds client dependency?





[GitHub] [hadoop] anuengineer closed pull request #1348: HDDS-2030. Generate simplifed reports by the dev-support/checks/*.sh scripts

2019-09-16 Thread GitBox
anuengineer closed pull request #1348: HDDS-2030. Generate simplifed reports by 
the dev-support/checks/*.sh scripts
URL: https://github.com/apache/hadoop/pull/1348
 
 
   





[GitHub] [hadoop] anuengineer commented on issue #1348: HDDS-2030. Generate simplifed reports by the dev-support/checks/*.sh scripts

2019-09-16 Thread GitBox
anuengineer commented on issue #1348: HDDS-2030. Generate simplifed reports by 
the dev-support/checks/*.sh scripts
URL: https://github.com/apache/hadoop/pull/1348#issuecomment-531923640
 
 
   +1, committed to the trunk. Thanks for the contributions, @elek and 
@adoroszlai 





[GitHub] [hadoop] arp7 commented on issue #1434: HDDS-2120. Remove hadoop classes from ozonefs-current jar

2019-09-16 Thread GitBox
arp7 commented on issue #1434: HDDS-2120. Remove hadoop classes from 
ozonefs-current jar
URL: https://github.com/apache/hadoop/pull/1434#issuecomment-531922140
 
 
   /retest





[GitHub] [hadoop] sahilTakiar commented on issue #970: HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8

2019-09-16 Thread GitBox
sahilTakiar commented on issue #970: HADOOP-16371: Option to disable GCM for 
SSL connections when running on Java 8
URL: https://github.com/apache/hadoop/pull/970#issuecomment-531922044
 
 
   Thanks for the feedback and running all the tests Steve! I left a comment 
above about why I think everything will still work without wildfly on the 
classpath.
   
   Working on addressing the other comments.





[GitHub] [hadoop] avijayanhwx commented on issue #1434: HDDS-2120. Remove hadoop classes from ozonefs-current jar

2019-09-16 Thread GitBox
avijayanhwx commented on issue #1434: HDDS-2120. Remove hadoop classes from 
ozonefs-current jar
URL: https://github.com/apache/hadoop/pull/1434#issuecomment-531921555
 
 
   LGTM +1





[GitHub] [hadoop] sahilTakiar commented on a change in pull request #970: HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8

2019-09-16 Thread GitBox
sahilTakiar commented on a change in pull request #970: HADOOP-16371: Option to 
disable GCM for SSL connections when running on Java 8
URL: https://github.com/apache/hadoop/pull/970#discussion_r324844996
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/DelegatingSSLSocketFactory.java
 ##
 @@ -118,33 +144,47 @@ private SSLSocketFactoryEx(SSLChannelMode 
preferredChannelMode)
   private void initializeSSLContext(SSLChannelMode preferredChannelMode)
   throws NoSuchAlgorithmException, KeyManagementException {
 switch (preferredChannelMode) {
-  case Default:
-try {
-  java.util.logging.Logger logger = 
java.util.logging.Logger.getLogger(SSL.class.getName());
-  logger.setLevel(Level.WARNING);
-  ctx = SSLContext.getInstance("openssl.TLS");
-  ctx.init(null, null, null);
-  // Strong reference needs to be kept to logger until initialization 
of SSLContext finished (see HADOOP-16174):
-  logger.setLevel(Level.INFO);
-  channelMode = SSLChannelMode.OpenSSL;
-} catch (NoSuchAlgorithmException e) {
-  LOG.warn("Failed to load OpenSSL. Falling back to the JSSE 
default.");
-  ctx = SSLContext.getDefault();
-  channelMode = SSLChannelMode.Default_JSSE;
-}
-break;
-  case OpenSSL:
+case Default:
+  if (!openSSLProviderRegistered) {
+OpenSSLProvider.register();
 
 Review comment:
   The check in `NetworkBinding#bindSSLChannelMode` explicitly prevents S3A 
users from setting `fs.s3a.ssl.channel.mode` to `default` or `OpenSSL`, so 
there should be no way an S3A user can cause the Wildfly jar to actually be 
used.
   
   If I understand Java correctly, a class should still be able to load this 
class without Wildfly on the classpath. Java only looks for the Wildfly classes 
when a Wildfly class is initialized (in this case `OpenSSLProvider`). The 
import statements are only used during compilation. ref: 
https://stackoverflow.com/a/12620773/11511572
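The lazy-initialization behavior described in the comment above can be demonstrated in isolation. In this sketch, the nested `OptionalDep` class (a hypothetical stand-in for an optional dependency like Wildfly's `OpenSSLProvider`) has a static initializer that flips a flag; per the Java Language Specification (§12.4.1), that initializer runs only on first active use, not when the class is merely referenced as a type.

```java
public class LazyLoadDemo {

    static boolean loaded = false;

    // Hypothetical stand-in for an optional dependency such as Wildfly's
    // OpenSSLProvider.
    static class OptionalDep {
        static {
            loaded = true; // runs only on first "active use" of the class
        }
        static void register() { }
    }

    public static void main(String[] args) {
        // Merely declaring a variable of the type does NOT initialize the
        // class; likewise, import statements exist only at compile time.
        OptionalDep unused = null;
        System.out.println("after declaration: " + loaded); // false

        // First active use (a static method call) triggers static init,
        // which is when any missing transitive dependency would surface.
        OptionalDep.register();
        System.out.println("after first use: " + loaded);   // true
    }
}
```

This is why code that only *mentions* the Wildfly class should keep working without the jar on the classpath, provided no code path actually initializes it.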





[GitHub] [hadoop] hadoop-yetus commented on issue #1455: HDDS-2137 : OzoneUtils to verify resourceName using HddsClientUtils

2019-09-16 Thread GitBox
hadoop-yetus commented on issue #1455: HDDS-2137 : OzoneUtils to verify 
resourceName using HddsClientUtils
URL: https://github.com/apache/hadoop/pull/1455#issuecomment-531915276
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 1184 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 72 | Maven dependency ordering for branch |
   | -1 | mvninstall | 31 | hadoop-ozone in trunk failed. |
   | -1 | compile | 22 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 62 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 927 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 168 | trunk passed |
   | 0 | spotbugs | 204 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 26 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for patch |
   | -1 | mvninstall | 34 | hadoop-ozone in the patch failed. |
   | -1 | compile | 25 | hadoop-ozone in the patch failed. |
   | -1 | javac | 25 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 28 | hadoop-hdds: The patch generated 7 new + 38 
unchanged - 5 fixed = 45 total (was 43) |
   | -0 | checkstyle | 28 | hadoop-ozone: The patch generated 3 new + 26 
unchanged - 9 fixed = 29 total (was 35) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 723 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 166 | the patch passed |
   | -1 | findbugs | 25 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 160 | hadoop-hdds in the patch failed. |
   | -1 | unit | 27 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 31 | The patch does not generate ASF License warnings. |
   | | | 4616 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.keyvalue.TestKeyValueContainer 
|
   |   | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1455/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1455 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux eee188862a63 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 56f042c |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1455/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1455/1/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1455/1/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1455/1/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1455/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1455/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1455/1/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1455/1/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1455/1/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1455/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1455/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1455/1/testReport/ |
   | Max. process+thread count | 340 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/client hadoop-ozone/common U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1455/1/console |
   

[GitHub] [hadoop] steveloughran commented on a change in pull request #1402: HADOOP-16547. make sure that s3guard prune sets up the FS

2019-09-16 Thread GitBox
steveloughran commented on a change in pull request #1402: HADOOP-16547. make 
sure that s3guard prune sets up the FS
URL: https://github.com/apache/hadoop/pull/1402#discussion_r324816053
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardTool.java
 ##
 @@ -363,6 +366,27 @@ protected void initS3AFileSystem(String path) throws 
IOException {
 filesystem = (S3AFileSystem) fs;
   }
 
+  /**
+   * Initialize the filesystem if there is none bonded to already and
+   * the command line path list is not empty.
+   * @param paths path list.
+   * @return true if at the end of the call, getFilesystem() is not null
+   * @throws IOException failure to instantiate.
+   */
+  protected boolean maybeInitFilesystem(final List paths)
+  throws IOException {
+// is there an S3 FS to create?
+if (getFilesystem() == null) {
+  // none yet -create one
+  if (!paths.isEmpty()) {
+initS3AFileSystem(paths.get(0));
 
 Review comment:
   no, I'm not worried about that


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] arp7 commented on issue #1452: HDDS-2121. Create a shaded ozone filesystem (client) jar

2019-09-16 Thread GitBox
arp7 commented on issue #1452: HDDS-2121. Create a shaded ozone filesystem 
(client) jar
URL: https://github.com/apache/hadoop/pull/1452#issuecomment-531893729
 
 
   The build seems to be failing in Jenkins. However I am able to compile Ozone 
locally with your patch.





[GitHub] [hadoop] hadoop-yetus commented on issue #1447: HDDS-2111. XSS fragments can be injected to the S3g landing page

2019-09-16 Thread GitBox
hadoop-yetus commented on issue #1447: HDDS-2111. XSS fragments can be injected 
to the S3g landing page  
URL: https://github.com/apache/hadoop/pull/1447#issuecomment-531891282
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 43 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 33 | hadoop-ozone in trunk failed. |
   | -1 | compile | 22 | hadoop-ozone in trunk failed. |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 879 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 169 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 33 | hadoop-ozone in the patch failed. |
   | -1 | jshint | 83 | The patch generated 1392 new + 2737 unchanged - 0 fixed 
= 4129 total (was 2737) |
   | -1 | compile | 24 | hadoop-ozone in the patch failed. |
   | -1 | javac | 24 | hadoop-ozone in the patch failed. |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 677 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 156 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 189 | hadoop-hdds in the patch failed. |
   | -1 | unit | 25 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 31 | The patch does not generate ASF License warnings. |
   | | | 2723 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.keyvalue.TestKeyValueContainer 
|
   |   | hadoop.ozone.container.common.TestDatanodeStateMachine |
   |   | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1447/2/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1447 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient jshint |
   | uname | Linux e3e3320f7474 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 56f042c |
   | Default Java | 1.8.0_222 |
   | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1447/2/artifact/out/branch-mvninstall-hadoop-ozone.txt |
   | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1447/2/artifact/out/branch-compile-hadoop-ozone.txt |
   | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1447/2/artifact/out/patch-mvninstall-hadoop-ozone.txt |
   | jshint | https://builds.apache.org/job/hadoop-multibranch/job/PR-1447/2/artifact/out/diff-patch-jshint.txt |
   | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1447/2/artifact/out/patch-compile-hadoop-ozone.txt |
   | javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1447/2/artifact/out/patch-compile-hadoop-ozone.txt |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1447/2/artifact/out/patch-unit-hadoop-hdds.txt |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1447/2/artifact/out/patch-unit-hadoop-ozone.txt |
   |  Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1447/2/testReport/ |
   | Max. process+thread count | 413 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/s3gateway U: hadoop-ozone/s3gateway |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1447/2/console |
   | versions | git=2.7.4 maven=3.3.9 jshint=2.10.2 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] adoroszlai commented on issue #1455: HDDS-2137 : OzoneUtils to verify resourceName using HddsClientUtils

2019-09-16 Thread GitBox
adoroszlai commented on issue #1455: HDDS-2137 : OzoneUtils to verify 
resourceName using HddsClientUtils
URL: https://github.com/apache/hadoop/pull/1455#issuecomment-531883313
 
 
   /label ozone





[GitHub] [hadoop] virajjasani opened a new pull request #1455: HDDS-2137 : OzoneUtils to verify resourceName using HddsClientUtils

2019-09-16 Thread GitBox
virajjasani opened a new pull request #1455: HDDS-2137 : OzoneUtils to verify 
resourceName using HddsClientUtils
URL: https://github.com/apache/hadoop/pull/1455
 
 
   





[jira] [Updated] (HADOOP-16565) Region must be provided when requesting session credentials or SdkClientException will be thrown

2019-09-16 Thread Gabor Bota (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-16565:

Status: Patch Available  (was: In Progress)

> Region must be provided when requesting session credentials or 
> SdkClientException will be thrown
> 
>
> Key: HADOOP-16565
> URL: https://issues.apache.org/jira/browse/HADOOP-16565
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> The error found during testing in the following tests:
> {noformat}
> [ERROR]   ITestS3ATemporaryCredentials.testInvalidSTSBinding:257 ? SdkClient 
> Unable to f...
> [ERROR]   ITestS3ATemporaryCredentials.testSTS:130 ? SdkClient Unable to find 
> a region v...
> [ERROR]   
> ITestS3ATemporaryCredentials.testSessionRequestExceptionTranslation:441->lambda$testSessionRequestExceptionTranslation$5:442
>  ? SdkClient
> [ERROR]   ITestS3ATemporaryCredentials.testSessionTokenExpiry:222 ? SdkClient 
> Unable to ...
> [ERROR]   ITestS3ATemporaryCredentials.testSessionTokenPropagation:193 ? 
> SdkClient Unabl...
> [ERROR]   ITestDelegatedMRJob.testJobSubmissionCollectsTokens:286 ? SdkClient 
> Unable to ...
> [ERROR]   ITestSessionDelegationInFileystem.testAddTokensFromFileSystem:235 ? 
> SdkClient ...
> [ERROR]   
> ITestSessionDelegationInFileystem.testCanRetrieveTokenFromCurrentUserCreds:260->createDelegationTokens:292->AbstractDelegationIT.mkTokens:88
>  ? SdkClient
> [ERROR]   
> ITestSessionDelegationInFileystem.testDTCredentialProviderFromCurrentUserCreds:278->createDelegationTokens:292->AbstractDelegationIT.mkTokens:88
>  ? SdkClient
> [ERROR]   
> ITestSessionDelegationInFileystem.testDelegatedFileSystem:308->createDelegationTokens:292->AbstractDelegationIT.mkTokens:88
>  ? SdkClient
> [ERROR]   
> ITestSessionDelegationInFileystem.testDelegationBindingMismatch1:432->createDelegationTokens:292->AbstractDelegationIT.mkTokens:88
>  ? SdkClient
> [ERROR]   ITestSessionDelegationInFileystem.testFileSystemBoundToCreator:681 
> ? SdkClient
> [ERROR]   ITestSessionDelegationInFileystem.testGetDTfromFileSystem:212 ? 
> SdkClient Unab...
> [ERROR]   
> ITestSessionDelegationInFileystem.testHDFSFetchDTCommand:606->lambda$testHDFSFetchDTCommand$3:607
>  ? SdkClient
> [ERROR]   ITestSessionDelegationInFileystem.testYarnCredentialPickup:576 ? 
> SdkClient Una...
> [ERROR]   ITestSessionDelegationTokens.testCreateAndUseDT:176 ? SdkClient 
> Unable to find...
> [ERROR]   ITestSessionDelegationTokens.testSaveLoadTokens:121 ? SdkClient 
> Unable to find...
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-16565) Region must be provided when requesting session credentials or SdkClientException will be thrown

2019-09-16 Thread Gabor Bota (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-16565 started by Gabor Bota.
---
> Region must be provided when requesting session credentials or 
> SdkClientException will be thrown
> 
>
> Key: HADOOP-16565
> URL: https://issues.apache.org/jira/browse/HADOOP-16565
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> The error found during testing in the following tests:
> {noformat}
> [ERROR]   ITestS3ATemporaryCredentials.testInvalidSTSBinding:257 ? SdkClient 
> Unable to f...
> [ERROR]   ITestS3ATemporaryCredentials.testSTS:130 ? SdkClient Unable to find 
> a region v...
> [ERROR]   
> ITestS3ATemporaryCredentials.testSessionRequestExceptionTranslation:441->lambda$testSessionRequestExceptionTranslation$5:442
>  ? SdkClient
> [ERROR]   ITestS3ATemporaryCredentials.testSessionTokenExpiry:222 ? SdkClient 
> Unable to ...
> [ERROR]   ITestS3ATemporaryCredentials.testSessionTokenPropagation:193 ? 
> SdkClient Unabl...
> [ERROR]   ITestDelegatedMRJob.testJobSubmissionCollectsTokens:286 ? SdkClient 
> Unable to ...
> [ERROR]   ITestSessionDelegationInFileystem.testAddTokensFromFileSystem:235 ? 
> SdkClient ...
> [ERROR]   
> ITestSessionDelegationInFileystem.testCanRetrieveTokenFromCurrentUserCreds:260->createDelegationTokens:292->AbstractDelegationIT.mkTokens:88
>  ? SdkClient
> [ERROR]   
> ITestSessionDelegationInFileystem.testDTCredentialProviderFromCurrentUserCreds:278->createDelegationTokens:292->AbstractDelegationIT.mkTokens:88
>  ? SdkClient
> [ERROR]   
> ITestSessionDelegationInFileystem.testDelegatedFileSystem:308->createDelegationTokens:292->AbstractDelegationIT.mkTokens:88
>  ? SdkClient
> [ERROR]   
> ITestSessionDelegationInFileystem.testDelegationBindingMismatch1:432->createDelegationTokens:292->AbstractDelegationIT.mkTokens:88
>  ? SdkClient
> [ERROR]   ITestSessionDelegationInFileystem.testFileSystemBoundToCreator:681 
> ? SdkClient
> [ERROR]   ITestSessionDelegationInFileystem.testGetDTfromFileSystem:212 ? 
> SdkClient Unab...
> [ERROR]   
> ITestSessionDelegationInFileystem.testHDFSFetchDTCommand:606->lambda$testHDFSFetchDTCommand$3:607
>  ? SdkClient
> [ERROR]   ITestSessionDelegationInFileystem.testYarnCredentialPickup:576 ? 
> SdkClient Una...
> [ERROR]   ITestSessionDelegationTokens.testCreateAndUseDT:176 ? SdkClient 
> Unable to find...
> [ERROR]   ITestSessionDelegationTokens.testSaveLoadTokens:121 ? SdkClient 
> Unable to find...
> {noformat}



--



[GitHub] [hadoop] hadoop-yetus commented on issue #1432: HADOOP-16557. [pb-upgrade] Upgrade protobuf.version to 3.7.1

2019-09-16 Thread GitBox
hadoop-yetus commented on issue #1432: HADOOP-16557. [pb-upgrade] Upgrade 
protobuf.version to 3.7.1
URL: https://github.com/apache/hadoop/pull/1432#issuecomment-531877189
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 117 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 77 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1334 | trunk passed |
   | +1 | compile | 1284 | trunk passed |
   | +1 | checkstyle | 166 | trunk passed |
   | +1 | mvnsite | 444 | trunk passed |
   | +1 | shadedclient | 1400 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 376 | trunk passed |
   | 0 | spotbugs | 27 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | 0 | findbugs | 27 | branch/hadoop-project no findbugs output file 
(findbugsXml.xml) |
   | 0 | findbugs | 27 | branch/hadoop-client-modules/hadoop-client-runtime no 
findbugs output file (findbugsXml.xml) |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for patch |
   | +1 | mvninstall | 448 | the patch passed |
   | +1 | compile | 1011 | the patch passed |
   | -1 | javac | 1011 | root generated 370 new + 1466 unchanged - 0 fixed = 
1836 total (was 1466) |
   | -0 | checkstyle | 159 | root: The patch generated 1 new + 425 unchanged - 
1 fixed = 426 total (was 426) |
   | +1 | mvnsite | 440 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 4 | The patch has no ill-formed XML file. |
   | -1 | shadedclient | 50 | patch has errors when building and testing our 
client artifacts. |
   | +1 | javadoc | 432 | the patch passed |
   | 0 | findbugs | 25 | hadoop-project has no data from findbugs |
   | 0 | findbugs | 26 | hadoop-client-modules/hadoop-client-runtime has no 
data from findbugs |
   ||| _ Other Tests _ |
   | +1 | unit | 25 | hadoop-project in the patch passed. |
   | +1 | unit | 589 | hadoop-common in the patch passed. |
   | +1 | unit | 131 | hadoop-hdfs-client in the patch passed. |
   | -1 | unit | 5943 | hadoop-hdfs in the patch failed. |
   | -1 | unit | 1450 | hadoop-hdfs-rbf in the patch failed. |
   | +1 | unit | 63 | hadoop-yarn-api in the patch passed. |
   | +1 | unit | 240 | hadoop-yarn-common in the patch passed. |
   | +1 | unit | 41 | hadoop-fs2img in the patch passed. |
   | +1 | unit | 31 | hadoop-client-runtime in the patch passed. |
   | +1 | asflicense | 54 | The patch does not generate ASF License warnings. |
   | | | 17768 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.tools.TestDFSZKFailoverController |
   |   | hadoop.hdfs.TestPersistBlocks |
   |   | hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken |
   |   | hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup |
   |   | hadoop.hdfs.server.federation.router.TestRouterFaultTolerant |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1432/6/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1432 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle |
   | uname | Linux 6389296271a0 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 1e13fe6 |
   | Default Java | 1.8.0_222 |
   | javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1432/6/artifact/out/diff-compile-javac-root.txt |
   | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1432/6/artifact/out/diff-checkstyle-root.txt |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1432/6/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1432/6/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt |
   |  Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1432/6/testReport/ |
   | Max. process+thread count | 3104 (vs. ulimit of 5500) |
   | modules | C: hadoop-project hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-rbf hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-tools/hadoop-fs2img hadoop-client-modules/hadoop-client-runtime U

[GitHub] [hadoop] steveloughran commented on a change in pull request #1332: HADOOP-16445. Allow separate custom signing algorithms for S3 and DDB

2019-09-16 Thread GitBox
steveloughran commented on a change in pull request #1332: HADOOP-16445. Allow 
separate custom signing algorithms for S3 and DDB
URL: https://github.com/apache/hadoop/pull/1332#discussion_r324782298
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AConfiguration.java
 ##
 @@ -617,4 +624,135 @@ public void testSecurityCredentialPropagationEndToEnd() 
throws Exception {
 "override,base");
   }
 
+  @Test(timeout = 10_000L)
+  public void testS3SpecificSignerOverride() throws IOException {
+ClientConfiguration clientConfiguration = null;
+Configuration config;
+
+String signerOverride = "testSigner";
+String s3SignerOverride = "testS3Signer";
+
+// Default SIGNING_ALGORITHM, overridden for S3 only
+config = new Configuration();
+config.set(SIGNING_ALGORITHM_S3, s3SignerOverride);
+clientConfiguration = S3AUtils.createAwsConfForS3(config, "dontcare");
+Assert.assertEquals(s3SignerOverride,
+clientConfiguration.getSignerOverride());
+clientConfiguration = S3AUtils.createAwsConfForDdb(config, "dontcare");
+Assert.assertNull(clientConfiguration.getSignerOverride());
+
+// Configured base SIGNING_ALGORITHM, overridden for S3 only
+config = new Configuration();
+config.set(SIGNING_ALGORITHM, signerOverride);
+config.set(SIGNING_ALGORITHM_S3, s3SignerOverride);
+clientConfiguration = S3AUtils.createAwsConfForS3(config, "dontcare");
+Assert.assertEquals(s3SignerOverride,
+clientConfiguration.getSignerOverride());
+clientConfiguration = S3AUtils.createAwsConfForDdb(config, "dontcare");
+Assert
+.assertEquals(signerOverride, clientConfiguration.getSignerOverride());
+  }
+
+  @Test(timeout = 10_000L)
+  public void testDdbSpecificSignerOverride() throws IOException {
+ClientConfiguration clientConfiguration = null;
+Configuration config;
+
+String signerOverride = "testSigner";
+String ddbSignerOverride = "testDdbSigner";
+
+// Default SIGNING_ALGORITHM, overridden for S3
+config = new Configuration();
+config.set(SIGNING_ALGORITHM_DDB, ddbSignerOverride);
+clientConfiguration = S3AUtils.createAwsConfForDdb(config, "dontcare");
+Assert.assertEquals(ddbSignerOverride,
+clientConfiguration.getSignerOverride());
+clientConfiguration = S3AUtils.createAwsConfForS3(config, "dontcare");
+Assert.assertNull(clientConfiguration.getSignerOverride());
+
+// Configured base SIGNING_ALGORITHM, overridden for S3
+config = new Configuration();
+config.set(SIGNING_ALGORITHM, signerOverride);
+config.set(SIGNING_ALGORITHM_DDB, ddbSignerOverride);
+clientConfiguration = S3AUtils.createAwsConfForDdb(config, "dontcare");
+Assert.assertEquals(ddbSignerOverride,
+clientConfiguration.getSignerOverride());
+clientConfiguration = S3AUtils.createAwsConfForS3(config, "dontcare");
+Assert
+.assertEquals(signerOverride, clientConfiguration.getSignerOverride());
+  }
+
+  // Expecting generic Exception.class to handle future implementation changes.
+  // For now, this is an NPE
+  @Test(timeout = 10_000L, expected = Exception.class)
+  public void testCustomSignerFailureIfNotRegistered() {
+Signer s1 = SignerFactory.createSigner("testsigner1", null);
+  }
+
+  @Test(timeout = 10_000L)
+  public void testCustomSignerInitialization() {
+Configuration config = new Configuration();
+SignerForTest1.reset();
+SignerForTest2.reset();
+config.set(CUSTOM_SIGNERS, "testsigner1:" + 
SignerForTest1.class.getName());
+initCustomSigners(config);
+Signer s1 = SignerFactory.createSigner("testsigner1", null);
+s1.sign(null, null);
+Assert.assertEquals(true, SignerForTest1.initialized);
+  }
+
+  @Test(timeout = 10_000L)
+  public void testMultipleCustomSignerInitialization() {
+Configuration config = new Configuration();
+SignerForTest1.reset();
+SignerForTest2.reset();
+config.set(CUSTOM_SIGNERS,
+"testsigner1:" + SignerForTest1.class.getName() + "," + "testsigner2:"
++ SignerForTest2.class.getName());
+initCustomSigners(config);
+Signer s1 = SignerFactory.createSigner("testsigner1", null);
+s1.sign(null, null);
+Assert.assertEquals(true, SignerForTest1.initialized);
+
+Signer s2 = SignerFactory.createSigner("testsigner2", null);
+s2.sign(null, null);
+Assert.assertEquals(true, SignerForTest2.initialized);
 
 Review comment:
   assertTrue, with error message.



[GitHub] [hadoop] steveloughran commented on a change in pull request #1332: HADOOP-16445. Allow separate custom signing algorithms for S3 and DDB

2019-09-16 Thread GitBox
steveloughran commented on a change in pull request #1332: HADOOP-16445. Allow 
separate custom signing algorithms for S3 and DDB
URL: https://github.com/apache/hadoop/pull/1332#discussion_r324781956
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AConfiguration.java
 ##
 @@ -617,4 +624,135 @@ public void testSecurityCredentialPropagationEndToEnd() 
throws Exception {
 "override,base");
   }
 
+  @Test(timeout = 10_000L)
+  public void testS3SpecificSignerOverride() throws IOException {
+ClientConfiguration clientConfiguration = null;
+Configuration config;
+
+String signerOverride = "testSigner";
+String s3SignerOverride = "testS3Signer";
+
+// Default SIGNING_ALGORITHM, overridden for S3 only
+config = new Configuration();
+config.set(SIGNING_ALGORITHM_S3, s3SignerOverride);
+clientConfiguration = S3AUtils.createAwsConfForS3(config, "dontcare");
+Assert.assertEquals(s3SignerOverride,
+clientConfiguration.getSignerOverride());
+clientConfiguration = S3AUtils.createAwsConfForDdb(config, "dontcare");
+Assert.assertNull(clientConfiguration.getSignerOverride());
+
+// Configured base SIGNING_ALGORITHM, overridden for S3 only
+config = new Configuration();
+config.set(SIGNING_ALGORITHM, signerOverride);
+config.set(SIGNING_ALGORITHM_S3, s3SignerOverride);
+clientConfiguration = S3AUtils.createAwsConfForS3(config, "dontcare");
+Assert.assertEquals(s3SignerOverride,
+clientConfiguration.getSignerOverride());
+clientConfiguration = S3AUtils.createAwsConfForDdb(config, "dontcare");
+Assert
+.assertEquals(signerOverride, clientConfiguration.getSignerOverride());
+  }
+
+  @Test(timeout = 10_000L)
+  public void testDdbSpecificSignerOverride() throws IOException {
+ClientConfiguration clientConfiguration = null;
+Configuration config;
+
+String signerOverride = "testSigner";
+String ddbSignerOverride = "testDdbSigner";
+
+// Default SIGNING_ALGORITHM, overridden for S3
+config = new Configuration();
+config.set(SIGNING_ALGORITHM_DDB, ddbSignerOverride);
+clientConfiguration = S3AUtils.createAwsConfForDdb(config, "dontcare");
+Assert.assertEquals(ddbSignerOverride,
+clientConfiguration.getSignerOverride());
+clientConfiguration = S3AUtils.createAwsConfForS3(config, "dontcare");
+Assert.assertNull(clientConfiguration.getSignerOverride());
+
+// Configured base SIGNING_ALGORITHM, overridden for S3
+config = new Configuration();
+config.set(SIGNING_ALGORITHM, signerOverride);
+config.set(SIGNING_ALGORITHM_DDB, ddbSignerOverride);
+clientConfiguration = S3AUtils.createAwsConfForDdb(config, "dontcare");
+Assert.assertEquals(ddbSignerOverride,
+clientConfiguration.getSignerOverride());
+clientConfiguration = S3AUtils.createAwsConfForS3(config, "dontcare");
+Assert
+.assertEquals(signerOverride, clientConfiguration.getSignerOverride());
+  }
+
+  // Expecting generic Exception.class to handle future implementation changes.
+  // For now, this is an NPE
+  @Test(timeout = 10_000L, expected = Exception.class)
+  public void testCustomSignerFailureIfNotRegistered() {
+Signer s1 = SignerFactory.createSigner("testsigner1", null);
 
 Review comment:
   Prefer `LambdaTestUtils.intercept`, which lets you assert the text of the 
caught exception, and which includes the toString value of the result of the 
lambda expression when it did not fail.
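
   For readers unfamiliar with the suggested utility: the contract of 
`LambdaTestUtils.intercept` (from Hadoop's `org.apache.hadoop.test` package) can 
be sketched with a minimal, dependency-free stand-in. The helper below only 
mimics the pattern the reviewer describes; it is not the real Hadoop 
implementation, and the signer names in `main` are invented for illustration.

```java
import java.util.concurrent.Callable;

public class InterceptSketch {

    // Minimal stand-in for LambdaTestUtils.intercept: evaluates the callable,
    // expects it to throw an exception of the given class whose toString()
    // contains the expected text, and returns the caught exception.
    static <E extends Throwable> E intercept(
            Class<E> clazz, String expectedText, Callable<?> eval) {
        Object result;
        try {
            result = eval.call();
        } catch (Throwable t) {
            if (!clazz.isInstance(t)) {
                throw new AssertionError("Wrong exception type: " + t, t);
            }
            if (expectedText != null && !t.toString().contains(expectedText)) {
                throw new AssertionError(
                        "Exception text lacks '" + expectedText + "': " + t, t);
            }
            return clazz.cast(t);
        }
        // No exception at all: report the toString() of the unwanted result,
        // which is the detail the reviewer highlights.
        throw new AssertionError(
                "Expected " + clazz.getName() + " but got result: " + result);
    }

    public static void main(String[] args) {
        // The lambda throws the expected exception, so intercept returns it.
        IllegalStateException ex = intercept(IllegalStateException.class,
                "unregistered",
                () -> { throw new IllegalStateException("unregistered signer"); });
        System.out.println("caught: " + ex.getMessage());
        // prints: caught: unregistered signer
    }
}
```

   Compared with `@Test(expected = Exception.class)`, this style pins down both 
the exception type and its message, and reports a useful diagnostic when the 
expression unexpectedly succeeds.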





[GitHub] [hadoop] steveloughran commented on a change in pull request #1332: HADOOP-16445. Allow separate custom signing algorithms for S3 and DDB

2019-09-16 Thread GitBox
steveloughran commented on a change in pull request #1332: HADOOP-16445. Allow 
separate custom signing algorithms for S3 and DDB
URL: https://github.com/apache/hadoop/pull/1332#discussion_r324782221
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AConfiguration.java
 ##
 @@ -617,4 +624,135 @@ public void testSecurityCredentialPropagationEndToEnd() 
throws Exception {
 "override,base");
   }
 
+  @Test(timeout = 10_000L)
+  public void testS3SpecificSignerOverride() throws IOException {
+ClientConfiguration clientConfiguration = null;
+Configuration config;
+
+String signerOverride = "testSigner";
+String s3SignerOverride = "testS3Signer";
+
+// Default SIGNING_ALGORITHM, overridden for S3 only
+config = new Configuration();
+config.set(SIGNING_ALGORITHM_S3, s3SignerOverride);
+clientConfiguration = S3AUtils.createAwsConfForS3(config, "dontcare");
+Assert.assertEquals(s3SignerOverride,
+clientConfiguration.getSignerOverride());
+clientConfiguration = S3AUtils.createAwsConfForDdb(config, "dontcare");
+Assert.assertNull(clientConfiguration.getSignerOverride());
+
+// Configured base SIGNING_ALGORITHM, overridden for S3 only
+config = new Configuration();
+config.set(SIGNING_ALGORITHM, signerOverride);
+config.set(SIGNING_ALGORITHM_S3, s3SignerOverride);
+clientConfiguration = S3AUtils.createAwsConfForS3(config, "dontcare");
+Assert.assertEquals(s3SignerOverride,
+clientConfiguration.getSignerOverride());
+clientConfiguration = S3AUtils.createAwsConfForDdb(config, "dontcare");
+Assert
+.assertEquals(signerOverride, clientConfiguration.getSignerOverride());
+  }
+
+  @Test(timeout = 10_000L)
+  public void testDdbSpecificSignerOverride() throws IOException {
+ClientConfiguration clientConfiguration = null;
+Configuration config;
+
+String signerOverride = "testSigner";
+String ddbSignerOverride = "testDdbSigner";
+
+// Default SIGNING_ALGORITHM, overridden for S3
+config = new Configuration();
+config.set(SIGNING_ALGORITHM_DDB, ddbSignerOverride);
+clientConfiguration = S3AUtils.createAwsConfForDdb(config, "dontcare");
+Assert.assertEquals(ddbSignerOverride,
+clientConfiguration.getSignerOverride());
+clientConfiguration = S3AUtils.createAwsConfForS3(config, "dontcare");
+Assert.assertNull(clientConfiguration.getSignerOverride());
+
+// Configured base SIGNING_ALGORITHM, overridden for S3
+config = new Configuration();
+config.set(SIGNING_ALGORITHM, signerOverride);
+config.set(SIGNING_ALGORITHM_DDB, ddbSignerOverride);
+clientConfiguration = S3AUtils.createAwsConfForDdb(config, "dontcare");
+Assert.assertEquals(ddbSignerOverride,
+clientConfiguration.getSignerOverride());
+clientConfiguration = S3AUtils.createAwsConfForS3(config, "dontcare");
+Assert
+.assertEquals(signerOverride, clientConfiguration.getSignerOverride());
+  }
+
+  // Expecting generic Exception.class to handle future implementation changes.
+  // For now, this is an NPE
+  @Test(timeout = 10_000L, expected = Exception.class)
+  public void testCustomSignerFailureIfNotRegistered() {
+Signer s1 = SignerFactory.createSigner("testsigner1", null);
+  }
+
+  @Test(timeout = 10_000L)
+  public void testCustomSignerInitialization() {
+Configuration config = new Configuration();
+SignerForTest1.reset();
+SignerForTest2.reset();
+config.set(CUSTOM_SIGNERS, "testsigner1:" + 
SignerForTest1.class.getName());
+initCustomSigners(config);
+Signer s1 = SignerFactory.createSigner("testsigner1", null);
+s1.sign(null, null);
+Assert.assertEquals(true, SignerForTest1.initialized);
 
 Review comment:
   assertTrue, with error message.





[GitHub] [hadoop] steveloughran commented on a change in pull request #1332: HADOOP-16445. Allow separate custom signing algorithms for S3 and DDB

2019-09-16 Thread GitBox
steveloughran commented on a change in pull request #1332: HADOOP-16445. Allow 
separate custom signing algorithms for S3 and DDB
URL: https://github.com/apache/hadoop/pull/1332#discussion_r324781226
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AConfiguration.java
 ##
 @@ -617,4 +624,135 @@ public void testSecurityCredentialPropagationEndToEnd() 
throws Exception {
 "override,base");
   }
 
+  @Test(timeout = 10_000L)
+  public void testS3SpecificSignerOverride() throws IOException {
+ClientConfiguration clientConfiguration = null;
+Configuration config;
+
+String signerOverride = "testSigner";
+String s3SignerOverride = "testS3Signer";
+
+// Default SIGNING_ALGORITHM, overridden for S3 only
+config = new Configuration();
+config.set(SIGNING_ALGORITHM_S3, s3SignerOverride);
+clientConfiguration = S3AUtils.createAwsConfForS3(config, "dontcare");
+Assert.assertEquals(s3SignerOverride,
+clientConfiguration.getSignerOverride());
+clientConfiguration = S3AUtils.createAwsConfForDdb(config, "dontcare");
+Assert.assertNull(clientConfiguration.getSignerOverride());
 
 Review comment:
   Needs an error message. FWIW, when writing tests you suspect I will be 
reviewing, always add an error message. One possible exception: assertEquals, 
but even there I would prefer one.
   
   When coding, think to yourself: "Jenkins just failed and all I have is this 
stack trace of the failure; is that enough to debug what went wrong?" If not: 
log more and expand the assertion message.
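   As a standalone sketch of that advice applied to the assertNull in question
(the helper and values below are invented for illustration, not the PR's code):

```java
// Sketch of an assertion message that makes a Jenkins-only failure debuggable;
// the value stands in for clientConfiguration.getSignerOverride().
public class AssertNullMessageSketch {

    // Minimal stand-in for org.junit.Assert.assertNull(String, Object).
    static void assertNull(String message, Object actual) {
        if (actual != null) {
            throw new AssertionError(message + " but was: <" + actual + ">");
        }
    }

    public static void main(String[] args) {
        // No DDB-specific signer was configured, so no override is expected.
        String ddbSignerOverride = null;
        assertNull("Expected no signer override in the DDB client configuration",
            ddbSignerOverride);
    }
}
```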





[GitHub] [hadoop] steveloughran commented on a change in pull request #1332: HADOOP-16445. Allow separate custom signing algorithms for S3 and DDB

2019-09-16 Thread GitBox
steveloughran commented on a change in pull request #1332: HADOOP-16445. Allow 
separate custom signing algorithms for S3 and DDB
URL: https://github.com/apache/hadoop/pull/1332#discussion_r324780120
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AConfiguration.java
 ##
 @@ -617,4 +624,135 @@ public void testSecurityCredentialPropagationEndToEnd() 
throws Exception {
 "override,base");
   }
 
+  @Test(timeout = 10_000L)
 
 Review comment:
   There's already a test timeout rule, so you don't need `timeout = 10_000`.





[GitHub] [hadoop] steveloughran commented on a change in pull request #1332: HADOOP-16445. Allow separate custom signing algorithms for S3 and DDB

2019-09-16 Thread GitBox
steveloughran commented on a change in pull request #1332: HADOOP-16445. Allow 
separate custom signing algorithms for S3 and DDB
URL: https://github.com/apache/hadoop/pull/1332#discussion_r324779446
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AConfiguration.java
 ##
 @@ -19,17 +19,24 @@
 package org.apache.hadoop.fs.s3a;
 
 import com.amazonaws.ClientConfiguration;
+import com.amazonaws.SignableRequest;
+import com.amazonaws.auth.AWSCredentials;
+import com.amazonaws.auth.Signer;
+import com.amazonaws.auth.SignerFactory;
 import com.amazonaws.services.s3.AmazonS3;
 import com.amazonaws.services.s3.S3ClientOptions;
 
+import java.io.IOException;
 import org.apache.commons.lang3.StringUtils;
 import org.apache.commons.lang3.reflect.FieldUtils;
+import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.contract.ContractTestUtils;
 import org.apache.hadoop.fs.s3native.S3xLoginHelper;
 import org.apache.hadoop.test.GenericTestUtils;
+import org.junit.Assert;
 
 Review comment:
   FWIW, We are moving to assertJ when we think it makes for better assertions.





[GitHub] [hadoop] steveloughran commented on a change in pull request #1332: HADOOP-16445. Allow separate custom signing algorithms for S3 and DDB

2019-09-16 Thread GitBox
steveloughran commented on a change in pull request #1332: HADOOP-16445. Allow 
separate custom signing algorithms for S3 and DDB
URL: https://github.com/apache/hadoop/pull/1332#discussion_r324779091
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AConfiguration.java
 ##
 @@ -19,17 +19,24 @@
 package org.apache.hadoop.fs.s3a;
 
 import com.amazonaws.ClientConfiguration;
+import com.amazonaws.SignableRequest;
+import com.amazonaws.auth.AWSCredentials;
+import com.amazonaws.auth.Signer;
+import com.amazonaws.auth.SignerFactory;
 import com.amazonaws.services.s3.AmazonS3;
 import com.amazonaws.services.s3.S3ClientOptions;
 
+import java.io.IOException;
 
 Review comment:
   Should go at the top, in its own little block.





[GitHub] [hadoop] steveloughran commented on a change in pull request #1332: HADOOP-16445. Allow separate custom signing algorithms for S3 and DDB

2019-09-16 Thread GitBox
steveloughran commented on a change in pull request #1332: HADOOP-16445. Allow 
separate custom signing algorithms for S3 and DDB
URL: https://github.com/apache/hadoop/pull/1332#discussion_r324778945
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
 ##
 @@ -300,6 +300,8 @@ public void initialize(URI name, Configuration 
originalConf)
 LOG.debug("Initializing S3AFileSystem for {}", bucket);
 // clone the configuration into one with propagated bucket options
 Configuration conf = propagateBucketOptions(originalConf, bucket);
+// Initialize any custom signers
+initCustomSigners(conf);
 
 Review comment:
   I guess this is the first thing which needs to be done, before anything else 
even thinks about talking to AWS services. If so, this is probably the right 
place. But I would put it after `patchSecurityCredentialProviders`





[GitHub] [hadoop] steveloughran commented on issue #1332: HADOOP-16445. Allow separate custom signing algorithms for S3 and DDB

2019-09-16 Thread GitBox
steveloughran commented on issue #1332: HADOOP-16445. Allow separate custom 
signing algorithms for S3 and DDB
URL: https://github.com/apache/hadoop/pull/1332#issuecomment-531861052
 
 
   we also use the credentials for talking to STS (session credentials) and I 
have an ambition to talk to Amazon SQS to subscribe to changes in an S3 bucket 
for spark streaming. As a result, I'm thinking about how to make that possible 
after your changes.
   
   I'm also slowly trying to stop `S3AUtils` getting any worse as the one-stop 
"throw all our static stuff in here" class. `S3AUtils` is a merge troublespot 
and has grown too big. I'm reluctant to move existing stuff, because it makes 
backporting so hard, but I'd like to make the amount of new stuff we had there 
nearly non-existent. In particular #970 is adding a new class 
`org.apache.hadoop.fs.s3a.impl.NetworkBinding` where networking stuff can go; 
I'm not sure the best place for authentication stuff. Maybe a class alongside 
that `org.apache.hadoop.fs.s3a.impl.AwsConfigurationFactory`. What do you think?
   
   Also, rather than a new method `createAwsConfForDdb` alongside the existing 
one, I'd rather than existing one was extended to take a string declaring what 
the configurations for, e.g : "s3", "ddb", "sts" ... I'm proposing a string 
over an enum to maintain binary compatibility for applications which add/use a 
new service. If the name is unknown, we could just warn and return the 
"default" configuration.
   
   So how about adding the new operations into some 
`AwsConfigurationFactory.createAwsConf(string purpose,...)` method, with 
`S3AUtils.createAwsConf()` tagged as deprecated and invoking the new method. 
I'm reluctant to cut it, as I suspect some people (me) have been using it 
elsewhere.
   
   With that, I'm now going to do some minor review of the patch at the 
line-by-line level. Those comments come secondary to what I've just suggested.
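   A minimal sketch of the purpose-keyed factory being proposed here might look
as follows. The class and method names come from the reviewer's suggestion and
are hypothetical, not a shipped Hadoop API; the returned strings merely stand in
for real `ClientConfiguration` objects.

```java
// Hedged sketch of one createAwsConf entry point keyed by a purpose string
// ("s3", "ddb", "sts"), warning and falling back to the default configuration
// for unknown names, as proposed in the review.
public class AwsConfFactorySketch {

    static String createAwsConf(String purpose) {
        switch (purpose) {
            case "s3":
            case "ddb":
            case "sts":
                return "conf-for-" + purpose;
            default:
                // A string rather than an enum keeps binary compatibility for
                // callers naming a service this version does not know about.
                System.err.println("Unknown purpose '" + purpose + "'; using default");
                return "conf-default";
        }
    }

    public static void main(String[] args) {
        if (!"conf-for-ddb".equals(createAwsConf("ddb"))) {
            throw new AssertionError("ddb purpose not recognised");
        }
        if (!"conf-default".equals(createAwsConf("sqs"))) {
            throw new AssertionError("unknown purpose should fall back to default");
        }
    }
}
```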
   





[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1332: HADOOP-16445. Allow separate custom signing algorithms for S3 and DDB

2019-09-16 Thread GitBox
hadoop-yetus removed a comment on issue #1332: HADOOP-16445. Allow separate 
custom signing algorithms for S3 and DDB
URL: https://github.com/apache/hadoop/pull/1332#issuecomment-531538318
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 42 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1124 | trunk passed |
   | +1 | compile | 34 | trunk passed |
   | +1 | checkstyle | 26 | trunk passed |
   | +1 | mvnsite | 38 | trunk passed |
   | +1 | shadedclient | 720 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 27 | trunk passed |
   | 0 | spotbugs | 59 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 57 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 35 | the patch passed |
   | +1 | compile | 27 | the patch passed |
   | +1 | javac | 27 | the patch passed |
   | +1 | checkstyle | 19 | hadoop-tools/hadoop-aws: The patch generated 0 new 
+ 31 unchanged - 1 fixed = 31 total (was 32) |
   | +1 | mvnsite | 32 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 749 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 26 | the patch passed |
   | +1 | findbugs | 64 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 75 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
   | | | 3227 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1332/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1332 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 714a7ba8d01b 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e04b8a4 |
   | Default Java | 1.8.0_222 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1332/5/testReport/ |
   | Max. process+thread count | 447 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1332/5/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] risdenk commented on a change in pull request #1402: HADOOP-16547. make sure that s3guard prune sets up the FS

2019-09-16 Thread GitBox
risdenk commented on a change in pull request #1402: HADOOP-16547. make sure 
that s3guard prune sets up the FS
URL: https://github.com/apache/hadoop/pull/1402#discussion_r324773281
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardTool.java
 ##
 @@ -363,6 +366,27 @@ protected void initS3AFileSystem(String path) throws 
IOException {
 filesystem = (S3AFileSystem) fs;
   }
 
+  /**
+   * Initialize the filesystem if there is none bonded to already and
+   * the command line path list is not empty.
+   * @param paths path list.
+   * @return true if at the end of the call, getFilesystem() is not null
+   * @throws IOException failure to instantiate.
+   */
+  protected boolean maybeInitFilesystem(final List<String> paths)
+  throws IOException {
+// is there an S3 FS to create?
+if (getFilesystem() == null) {
+  // none yet -create one
+  if (!paths.isEmpty()) {
+initS3AFileSystem(paths.get(0));
 
 Review comment:
   Do we need to worry about a race condition here where `maybeInitFilesystem` 
is called multiple times and `initS3AFileSystem` is called multiple times? 
Doesn't look like `initS3AFileSystem` has any protection against initializing 
multiple times.
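   One way to address the concern, sketched below as a self-contained toy (the
field and method names mirror the discussion, but this is not the actual
S3GuardTool code): synchronize the method and keep the null check, so repeated
or concurrent calls initialize at most once.

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of making maybeInitFilesystem idempotent; the
// Object fields stand in for the S3AFileSystem and its expensive init.
public class IdempotentInitSketch {

    private Object filesystem;   // stands in for the S3AFileSystem field
    int initCount;               // counts how often the expensive init ran

    // Synchronized so two callers cannot both observe filesystem == null;
    // the null check turns repeated calls into no-ops.
    synchronized boolean maybeInitFilesystem(List<String> paths) {
        if (filesystem == null && !paths.isEmpty()) {
            filesystem = new Object(); // stands in for initS3AFileSystem(paths.get(0))
            initCount++;
        }
        return filesystem != null;
    }

    public static void main(String[] args) {
        IdempotentInitSketch tool = new IdempotentInitSketch();
        List<String> paths = Arrays.asList("s3a://example-bucket/prune");
        tool.maybeInitFilesystem(paths);
        tool.maybeInitFilesystem(paths); // second call must not re-initialize
        if (tool.initCount != 1) {
            throw new AssertionError("filesystem initialized " + tool.initCount + " times");
        }
    }
}
```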





[GitHub] [hadoop] adoroszlai commented on issue #1453: HDDS-2135. OM Metric mismatch (MultipartUpload failures)

2019-09-16 Thread GitBox
adoroszlai commented on issue #1453: HDDS-2135. OM Metric mismatch 
(MultipartUpload failures)
URL: https://github.com/apache/hadoop/pull/1453#issuecomment-531855032
 
 
   /retest





[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1332: HADOOP-16445. Allow separate custom signing algorithms for S3 and DDB

2019-09-16 Thread GitBox
hadoop-yetus removed a comment on issue #1332: HADOOP-16445. Allow separate 
custom signing algorithms for S3 and DDB
URL: https://github.com/apache/hadoop/pull/1332#issuecomment-525172597
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 79 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1193 | trunk passed |
   | +1 | compile | 30 | trunk passed |
   | +1 | checkstyle | 23 | trunk passed |
   | +1 | mvnsite | 35 | trunk passed |
   | +1 | shadedclient | 794 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 26 | trunk passed |
   | 0 | spotbugs | 58 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 56 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 32 | the patch passed |
   | +1 | compile | 26 | the patch passed |
   | +1 | javac | 26 | the patch passed |
   | +1 | checkstyle | 18 | hadoop-tools/hadoop-aws: The patch generated 0 new 
+ 33 unchanged - 1 fixed = 33 total (was 34) |
   | +1 | mvnsite | 30 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 839 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 22 | the patch passed |
   | +1 | findbugs | 60 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 67 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 3441 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1332/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1332 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 83581ae0122f 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / b69ac57 |
   | Default Java | 1.8.0_222 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1332/3/testReport/ |
   | Max. process+thread count | 353 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1332/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1332: HADOOP-16445. Allow separate custom signing algorithms for S3 and DDB

2019-09-16 Thread GitBox
hadoop-yetus removed a comment on issue #1332: HADOOP-16445. Allow separate 
custom signing algorithms for S3 and DDB
URL: https://github.com/apache/hadoop/pull/1332#issuecomment-527260596
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 64 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1175 | trunk passed |
   | +1 | compile | 38 | trunk passed |
   | +1 | checkstyle | 25 | trunk passed |
   | +1 | mvnsite | 40 | trunk passed |
   | +1 | shadedclient | 730 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 28 | trunk passed |
   | 0 | spotbugs | 60 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 58 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 36 | the patch passed |
   | +1 | compile | 28 | the patch passed |
   | +1 | javac | 28 | the patch passed |
   | +1 | checkstyle | 23 | hadoop-tools/hadoop-aws: The patch generated 0 new 
+ 33 unchanged - 1 fixed = 33 total (was 34) |
   | +1 | mvnsite | 32 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 842 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 27 | the patch passed |
   | +1 | findbugs | 65 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 80 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 31 | The patch does not generate ASF License warnings. |
   | | | 3415 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1332/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1332 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 29262084f437 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 915cbc9 |
   | Default Java | 1.8.0_222 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1332/4/testReport/ |
   | Max. process+thread count | 415 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1332/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1332: HADOOP-16445. Allow separate custom signing algorithms for S3 and DDB

2019-09-16 Thread GitBox
hadoop-yetus removed a comment on issue #1332: HADOOP-16445. Allow separate 
custom signing algorithms for S3 and DDB
URL: https://github.com/apache/hadoop/pull/1332#issuecomment-523860475
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 90 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1255 | trunk passed |
   | +1 | compile | 35 | trunk passed |
   | +1 | checkstyle | 25 | trunk passed |
   | +1 | mvnsite | 39 | trunk passed |
   | +1 | shadedclient | 795 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 25 | trunk passed |
   | 0 | spotbugs | 58 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 56 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 32 | the patch passed |
   | +1 | compile | 26 | the patch passed |
   | +1 | javac | 26 | the patch passed |
   | +1 | checkstyle | 20 | hadoop-tools/hadoop-aws: The patch generated 0 new 
+ 33 unchanged - 1 fixed = 33 total (was 34) |
   | +1 | mvnsite | 34 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 864 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 25 | the patch passed |
   | +1 | findbugs | 71 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 89 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 30 | The patch does not generate ASF License warnings. |
   | | | 3590 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1332/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1332 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 306f19c4b5ff 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ee7c261 |
   | Default Java | 1.8.0_222 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1332/2/testReport/ |
   | Max. process+thread count | 306 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1332/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1332: HADOOP-16445. Allow separate custom signing algorithms for S3 and DDB

2019-09-16 Thread GitBox
hadoop-yetus removed a comment on issue #1332: HADOOP-16445. Allow separate 
custom signing algorithms for S3 and DDB
URL: https://github.com/apache/hadoop/pull/1332#issuecomment-523637550
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 97 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1454 | trunk passed |
   | +1 | compile | 36 | trunk passed |
   | +1 | checkstyle | 25 | trunk passed |
   | +1 | mvnsite | 37 | trunk passed |
   | +1 | shadedclient | 820 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 31 | trunk passed |
   | 0 | spotbugs | 71 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 70 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 45 | the patch passed |
   | +1 | compile | 36 | the patch passed |
   | +1 | javac | 36 | the patch passed |
   | +1 | checkstyle | 21 | hadoop-tools/hadoop-aws: The patch generated 0 new 
+ 33 unchanged - 1 fixed = 33 total (was 34) |
   | +1 | mvnsite | 36 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 851 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 27 | the patch passed |
   | +1 | findbugs | 71 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 86 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 32 | The patch does not generate ASF License warnings. |
   | | | 3865 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1332/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1332 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 0fb3ea60aedb 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 2ae7f44 |
   | Default Java | 1.8.0_222 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1332/1/testReport/ |
   | Max. process+thread count | 434 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1332/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] elek commented on issue #1341: HDDS-2022. Add additional freon tests

2019-09-16 Thread GitBox
elek commented on issue #1341: HDDS-2022. Add additional freon tests
URL: https://github.com/apache/hadoop/pull/1341#issuecomment-531825709
 
 
   /retest





[GitHub] [hadoop] elek commented on issue #1194: HDDS-1879. Support multiple excluded scopes when choosing datanodes in NetworkTopology

2019-09-16 Thread GitBox
elek commented on issue #1194: HDDS-1879.  Support multiple excluded scopes 
when choosing datanodes in NetworkTopology
URL: https://github.com/apache/hadoop/pull/1194#issuecomment-531817247
 
 
   /retest





[GitHub] [hadoop] elek commented on issue #1447: HDDS-2111. XSS fragments can be injected to the S3g landing page

2019-09-16 Thread GitBox
elek commented on issue #1447: HDDS-2111. XSS fragments can be injected to the 
S3g landing page  
URL: https://github.com/apache/hadoop/pull/1447#issuecomment-531816534
 
 
   /retest





[GitHub] [hadoop] elek commented on issue #1448: HDDS-2110. Arbitrary file can be downloaded with the help of ProfilerServlet

2019-09-16 Thread GitBox
elek commented on issue #1448: HDDS-2110. Arbitrary file can be downloaded with 
the help of ProfilerServlet
URL: https://github.com/apache/hadoop/pull/1448#issuecomment-531816573
 
 
   /retest





[GitHub] [hadoop] elek commented on issue #1451: HDDS-2134. OM Metrics graphs include empty request type

2019-09-16 Thread GitBox
elek commented on issue #1451: HDDS-2134. OM Metrics graphs include empty 
request type
URL: https://github.com/apache/hadoop/pull/1451#issuecomment-531816625
 
 
   /retest





[GitHub] [hadoop] elek commented on issue #1440: HDDS-2114: Rename does not preserve non-explicitly created interim directories

2019-09-16 Thread GitBox
elek commented on issue #1440: HDDS-2114: Rename does not preserve 
non-explicitly created interim directories
URL: https://github.com/apache/hadoop/pull/1440#issuecomment-531816460
 
 
   /retest





[GitHub] [hadoop] elek commented on issue #1434: HDDS-2120. Remove hadoop classes from ozonefs-current jar

2019-09-16 Thread GitBox
elek commented on issue #1434: HDDS-2120. Remove hadoop classes from 
ozonefs-current jar
URL: https://github.com/apache/hadoop/pull/1434#issuecomment-531816289
 
 
   /retest





[GitHub] [hadoop] elek edited a comment on issue #1434: HDDS-2120. Remove hadoop classes from ozonefs-current jar

2019-09-16 Thread GitBox
elek edited a comment on issue #1434: HDDS-2120. Remove hadoop classes from 
ozonefs-current jar
URL: https://github.com/apache/hadoop/pull/1434#issuecomment-531778910
 
 
   > Should we also exclude Ratis.. since HBase potentially has a dependency on 
Ratis and we should use the in-build version instead.
   
   We need Ratis on the classpath. I am not sure what the best approach is to 
support HBase. If we exclude it, we would need to put two jar files on the 
classpath of hive/spark: ozonefs and ratis.
   
   Do you know which HBase version uses Ratis?
   
   > What about third party dependencies? Downstream components could run into 
conflicts.. I believe that will require shading
   
   Yes, they can be shaded in HDDS-2121 (at least half of them; some, such as 
protobuf and the logging classes, should be kept in their original form).
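
   A shaded ozonefs jar along the lines described above could be sketched with 
maven-shade-plugin roughly as follows. This is illustrative only: the package 
names and the eventual HDDS-2121 configuration are assumptions, not the actual 
build setup.

   ```xml
   <!-- Sketch, not the real HDDS-2121 config: relocate Guava into a
        private namespace, but leave protobuf and slf4j in their original
        packages so wire formats and logger bindings keep working. -->
   <plugin>
     <groupId>org.apache.maven.plugins</groupId>
     <artifactId>maven-shade-plugin</artifactId>
     <executions>
       <execution>
         <phase>package</phase>
         <goals><goal>shade</goal></goals>
         <configuration>
           <relocations>
             <relocation>
               <pattern>com.google.common</pattern>
               <shadedPattern>org.apache.hadoop.ozone.shaded.com.google.common</shadedPattern>
             </relocation>
             <!-- com.google.protobuf and org.slf4j are deliberately not listed -->
           </relocations>
         </configuration>
       </execution>
     </executions>
   </plugin>
   ```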
   
   
   





[GitHub] [hadoop] elek closed pull request #1444: HDDS-2078. Get/Renew DelegationToken NPE after HDDS-1909. Contributed…

2019-09-16 Thread GitBox
elek closed pull request #1444: HDDS-2078. Get/Renew DelegationToken NPE after 
HDDS-1909. Contributed…
URL: https://github.com/apache/hadoop/pull/1444
 
 
   





[GitHub] [hadoop] steveloughran commented on issue #1454: HADOOP-16565. Region must be provided when requesting session credentials or SdkClientException will be thrown

2019-09-16 Thread GitBox
steveloughran commented on issue #1454: HADOOP-16565. Region must be provided 
when requesting session credentials or SdkClientException will be thrown
URL: https://github.com/apache/hadoop/pull/1454#issuecomment-531814527
 
 
   LGTM; +1 pending successful run. 





[GitHub] [hadoop] elek commented on issue #1277: HDDS-1054. List Multipart uploads in a bucket

2019-09-16 Thread GitBox
elek commented on issue #1277: HDDS-1054. List Multipart uploads in a bucket
URL: https://github.com/apache/hadoop/pull/1277#issuecomment-531813760
 
 
   /retest





[GitHub] [hadoop] elek commented on issue #1277: HDDS-1054. List Multipart uploads in a bucket

2019-09-16 Thread GitBox
elek commented on issue #1277: HDDS-1054. List Multipart uploads in a bucket
URL: https://github.com/apache/hadoop/pull/1277#issuecomment-531811701
 
 
   /retest





[jira] [Commented] (HADOOP-16547) s3guard prune command doesn't get AWS auth chain from FS

2019-09-16 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16930627#comment-16930627
 ] 

Steve Loughran commented on HADOOP-16547:
-

I've also verified that the test failure of HADOOP-16576 goes away with this 
patch. I'm happy with it!

> s3guard prune command doesn't get AWS auth chain from FS
> 
>
> Key: HADOOP-16547
> URL: https://issues.apache.org/jira/browse/HADOOP-16547
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> s3guard prune command doesn't get AWS auth chain from any FS, so it just 
> drives the DDB store from the conf settings. If S3A is set up to use 
> Delegation tokens then the DTs/custom AWS auth sequence is not picked up, so 
> you get an auth failure.
> Fix:
> # instantiate the FS before calling initMetadataStore
> # review other commands to make sure problem isn't replicated
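
The two-step fix above boils down to an initialization-order change: build the filesystem first, then derive the metadata store's credentials from it. A toy sketch of the idea, using simplified stand-in classes (not the real S3GuardTool/DynamoDBMetadataStore API):

```java
// Simplified model of the prune auth bug: the buggy path initializes the
// metadata store straight from configuration, losing any delegation-token
// credential chain attached to the filesystem; the fixed path instantiates
// the filesystem first and lets the store inherit its auth chain.
class Sketch {
    static class FileSystem {
        final String credentials;
        FileSystem(String credentials) { this.credentials = credentials; }
    }

    static class MetadataStore {
        String credentials;
        // Buggy path: driven only by conf settings.
        void initialize(String confCredentials) { this.credentials = confCredentials; }
        // Fixed path: inherits the live filesystem's credentials.
        void initialize(FileSystem fs) { this.credentials = fs.credentials; }
    }

    static String pruneAuth(boolean fixed) {
        FileSystem fs = new FileSystem("delegation-token");
        MetadataStore store = new MetadataStore();
        if (fixed) {
            store.initialize(fs);            // FS instantiated before initMetadataStore
        } else {
            store.initialize("static-keys"); // store driven from conf alone
        }
        return store.credentials;
    }

    public static void main(String[] args) {
        System.out.println(pruneAuth(false));
        System.out.println(pruneAuth(true));
    }
}
```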



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16576) ITestS3GuardDDBRootOperations. test_100_FilesystemPrune failure

2019-09-16 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16930625#comment-16930625
 ] 

Steve Loughran commented on HADOOP-16576:
-

With the prune patch, this test passes: [INFO] 
---
[INFO] Running org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardDDBRootOperations
[WARNING] Tests run: 7, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 12.12 
s - in org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardDDBRootOperations

> ITestS3GuardDDBRootOperations. test_100_FilesystemPrune failure
> ---
>
> Key: HADOOP-16576
> URL: https://issues.apache.org/jira/browse/HADOOP-16576
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> Failure in  
> test_100_FilesystemPrune(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardDDBRootOperations)
>   : No DynamoDB table name configured; 
> fs.s3a.s3guard.ddb.region =eu-west-1; no region is defined.
> This is surfacing on a branch which doesn't have my pending prune init code, 
> so even though an FS was passed in, it wasn't used for init. Assumption: this 
> will go away forever when that patch is in






[jira] [Commented] (HADOOP-16576) ITestS3GuardDDBRootOperations. test_100_FilesystemPrune failure

2019-09-16 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16930624#comment-16930624
 ] 

Steve Loughran commented on HADOOP-16576:
-

Given that the test fails when there's no global region/DDB table, it should be 
possible to create a real integration test for this. It would be complex, though, 
as I'd have to read the values, set them on the bucket config, and then unset the 
global ones; I'd inevitably end up with something that would only work on my config.

> ITestS3GuardDDBRootOperations. test_100_FilesystemPrune failure
> ---
>
> Key: HADOOP-16576
> URL: https://issues.apache.org/jira/browse/HADOOP-16576
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> Failure in  
> test_100_FilesystemPrune(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardDDBRootOperations)
>   : No DynamoDB table name configured; 
> fs.s3a.s3guard.ddb.region =eu-west-1; no region is defined.
> This is surfacing on a branch which doesn't have my pending prune init code, 
> so even though an FS was passed in, it wasn't used for init. Assumption: this 
> will go away forever when that patch is in






[jira] [Comment Edited] (HADOOP-16576) ITestS3GuardDDBRootOperations. test_100_FilesystemPrune failure

2019-09-16 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16930625#comment-16930625
 ] 

Steve Loughran edited comment on HADOOP-16576 at 9/16/19 2:47 PM:
--

With the prune patch, this test passes: 

[INFO] ---
[INFO] Running org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardDDBRootOperations
[WARNING] Tests run: 7, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 12.12 
s - in org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardDDBRootOperations


was (Author: ste...@apache.org):
With the prune patch, this test passes: [INFO] 
---
[INFO] Running org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardDDBRootOperations
[WARNING] Tests run: 7, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 12.12 
s - in org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardDDBRootOperations

> ITestS3GuardDDBRootOperations. test_100_FilesystemPrune failure
> ---
>
> Key: HADOOP-16576
> URL: https://issues.apache.org/jira/browse/HADOOP-16576
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> Failure in  
> test_100_FilesystemPrune(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardDDBRootOperations)
>   : No DynamoDB table name configured; 
> fs.s3a.s3guard.ddb.region =eu-west-1; no region is defined.
> This is surfacing on a branch which doesn't have my pending prune init code, 
> so even though an FS was passed in, it wasn't used for init. Assumption: this 
> will go away forever when that patch is in






[GitHub] [hadoop] hadoop-yetus commented on issue #1454: HADOOP-16565. Region must be provided when requesting session credentials or SdkClientException will be thrown

2019-09-16 Thread GitBox
hadoop-yetus commented on issue #1454: HADOOP-16565. Region must be provided 
when requesting session credentials or SdkClientException will be thrown
URL: https://github.com/apache/hadoop/pull/1454#issuecomment-531809975
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 46 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1193 | trunk passed |
   | +1 | compile | 38 | trunk passed |
   | +1 | checkstyle | 29 | trunk passed |
   | +1 | mvnsite | 42 | trunk passed |
   | +1 | shadedclient | 859 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 31 | trunk passed |
   | 0 | spotbugs | 65 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 61 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 38 | the patch passed |
   | +1 | compile | 29 | the patch passed |
   | +1 | javac | 29 | the patch passed |
   | -0 | checkstyle | 21 | hadoop-tools/hadoop-aws: The patch generated 1 new 
+ 0 unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 35 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 896 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 28 | the patch passed |
   | +1 | findbugs | 68 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 82 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
   | | | 3625 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1454/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1454 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 5fa659233982 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 363373e |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1454/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1454/1/testReport/ |
   | Max. process+thread count | 414 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1454/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Resolved] (HADOOP-16576) ITestS3GuardDDBRootOperations. test_100_FilesystemPrune failure

2019-09-16 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16576.
-
  Assignee: Steve Loughran
Resolution: Duplicate

> ITestS3GuardDDBRootOperations. test_100_FilesystemPrune failure
> ---
>
> Key: HADOOP-16576
> URL: https://issues.apache.org/jira/browse/HADOOP-16576
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> Failure in  
> test_100_FilesystemPrune(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardDDBRootOperations)
>   : No DynamoDB table name configured; 
> fs.s3a.s3guard.ddb.region =eu-west-1; no region is defined.
> This is surfacing on a branch which doesn't have my pending prune init code, 
> so even though an FS was passed in, it wasn't used for init. Assumption: this 
> will go away forever when that patch is in






[jira] [Commented] (HADOOP-16576) ITestS3GuardDDBRootOperations. test_100_FilesystemPrune failure

2019-09-16 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16930617#comment-16930617
 ] 

Steve Loughran commented on HADOOP-16576:
-

See also:
{code}
[ERROR] Tests run: 7, Failures: 0, Errors: 1, Skipped: 2, Time elapsed: 104.79 
s <<< FAILURE! - in 
org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardDDBRootOperations
[ERROR] 
test_100_FilesystemPrune(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardDDBRootOperations)
  Time elapsed: 0.6 s  <<< ERROR!
java.lang.IllegalArgumentException: No DynamoDB table name configured
at 
com.google.common.base.Preconditions.checkArgument(Preconditions.java:141)
at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initialize(DynamoDBMetadataStore.java:497)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.initMetadataStore(S3GuardTool.java:317)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$Prune.run(S3GuardTool.java:1071)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:401)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:1672)
at 
org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardDDBRootOperations.test_100_FilesystemPrune(ITestS3GuardDDBRootOperations.java:154)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
{code}


> ITestS3GuardDDBRootOperations. test_100_FilesystemPrune failure
> ---
>
> Key: HADOOP-16576
> URL: https://issues.apache.org/jira/browse/HADOOP-16576
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Major
>
> Failure in  
> test_100_FilesystemPrune(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardDDBRootOperations)
>   : No DynamoDB table name configured; 
> fs.s3a.s3guard.ddb.region =eu-west-1; no region is defined.
> This is surfacing on a branch which doesn't have my pending prune init code, 
> so even though an FS was passed in, it wasn't used for init. Assumption: this 
> will go away forever when that patch is in






[GitHub] [hadoop] steveloughran edited a comment on issue #970: HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8

2019-09-16 Thread GitBox
steveloughran edited a comment on issue #970: HADOOP-16371: Option to disable 
GCM for SSL connections when running on Java 8
URL: https://github.com/apache/hadoop/pull/970#issuecomment-531754074
 
 
   Patch is coming together nicely; nearly there. Done CLI tests as well as the 
-aws suite.
   
   
   A big fear of mine is that the current patch will, through transitive 
references, fail if the
   wildfly JAR isn't on the CP.
   
   But I couldn't actually create that failure condition when I tried on the 
CLI. 
   
   First, I extended the patched cloudstore s3a diagnostics to look for the new class:
   
   ```
   class: org.wildfly.openssl.OpenSSLProvider
  Not found on classpath: org.wildfly.openssl.OpenSSLProvider
   ```
   
   tested IO against a store -all good.
   
   And when I switch to an unsupported mode, I get the expected stack trace:
   ```
   2019-09-16 13:06:11,124 [main] INFO  diag.StoreDiag 
(DurationInfo.java:(53)) - Starting: Creating filesystem 
s3a://hwdev-steve-ireland-new/
   2019-09-16 13:06:11,683 [main] INFO  diag.StoreDiag 
(DurationInfo.java:close(100)) - Creating filesystem 
s3a://hwdev-steve-ireland-new/: duration 0:00:561
   java.lang.UnsupportedOperationException: S3A does not support setting 
fs.s3a.ssl.channel.mode OpenSSL or Default
at 
org.apache.hadoop.fs.s3a.impl.NetworkBinding.bindSSLChannelMode(NetworkBinding.java:86)
at 
org.apache.hadoop.fs.s3a.S3AUtils.initProtocolSettings(S3AUtils.java:1266)
at 
org.apache.hadoop.fs.s3a.S3AUtils.initConnectionSettings(S3AUtils.java:1230)
at org.apache.hadoop.fs.s3a.S3AUtils.createAwsConf(S3AUtils.java:1211)
at 
org.apache.hadoop.fs.s3a.DefaultS3ClientFactory.createS3Client(DefaultS3ClientFactory.java:58)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.bindAWSClient(S3AFileSystem.java:543)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:364)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3370)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:136)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3419)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3387)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:502)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365)
at 
org.apache.hadoop.fs.store.diag.StoreDiag.executeFileSystemOperations(StoreDiag.java:860)
at org.apache.hadoop.fs.store.diag.StoreDiag.run(StoreDiag.java:409)
at org.apache.hadoop.fs.store.diag.StoreDiag.run(StoreDiag.java:353)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.fs.store.diag.StoreDiag.exec(StoreDiag.java:1163)
at org.apache.hadoop.fs.store.diag.StoreDiag.main(StoreDiag.java:1172)
at storediag.main(storediag.java:25)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
   2019-09-16 13:06:11,685 [main] INFO  util.ExitUtil (ExitUtil.java:t
   ```
   
   which is telling me that my fears are misguided?
   
   what do others say?
   
   BTW @bgaborg, been having problems with STS tests too? Try setting a region 
for the endpoint. Starting to suspect the latest SDK needs this now.
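
   The classpath probe in the diagnostics output above can be reproduced 
generically with reflection. A minimal sketch; the `ClasspathProbe` class and 
its output format are mine, not cloudstore's:

   ```java
   // Report whether a class is loadable at runtime without linking against
   // it at compile time, similar in spirit to the cloudstore s3a diagnostics
   // quoted above. Class.forName with initialize=false only checks
   // loadability; it does not run static initializers.
   public class ClasspathProbe {
       static String probe(String className) {
           try {
               Class.forName(className, false, ClasspathProbe.class.getClassLoader());
               return "found: " + className;
           } catch (ClassNotFoundException e) {
               return "Not found on classpath: " + className;
           }
       }

       public static void main(String[] args) {
           System.out.println(probe("java.lang.String"));
           System.out.println(probe("org.wildfly.openssl.OpenSSLProvider"));
       }
   }
   ```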




