[jira] [Assigned] (HADOOP-16447) Upgrade JUnit5 from 5.3.1 to 5.5+ to support global timeout

2019-07-22 Thread kevin su (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kevin su reassigned HADOOP-16447:
-

Assignee: kevin su  (was: Akira Ajisaka)

> Upgrade JUnit5 from 5.3.1 to 5.5+ to support global timeout
> ---
>
> Key: HADOOP-16447
> URL: https://issues.apache.org/jira/browse/HADOOP-16447
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: kevin su
>Priority: Major
>
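For context on the "global timeout" this sub-task targets: JUnit 5.5 introduced declarative timeouts and configuration parameters for a default timeout. A minimal sketch of how a project could enable one once on 5.5+ (the property names come from the JUnit Jupiter documentation; the values are illustrative, not a proposed Hadoop setting):

```properties
# src/test/resources/junit-platform.properties
# Default timeout applied to all testable and lifecycle methods.
junit.jupiter.execution.timeout.default = 10 s
# A narrower default can override the global one for @Test methods only.
junit.jupiter.execution.timeout.testable.method.default = 5 s
```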




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-16447) Upgrade JUnit5 from 5.3.1 to 5.5+ to support global timeout

2019-07-22 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reassigned HADOOP-16447:
--

Assignee: Akira Ajisaka

> Upgrade JUnit5 from 5.3.1 to 5.5+ to support global timeout
> ---
>
> Key: HADOOP-16447
> URL: https://issues.apache.org/jira/browse/HADOOP-16447
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>







[jira] [Commented] (HADOOP-14693) Upgrade JUnit from 4 to 5

2019-07-22 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16890689#comment-16890689
 ] 

Akira Ajisaka commented on HADOOP-14693:


Global timeout is supported in JUnit 5.5.0. Filed HADOOP-16447 to upgrade 
JUnit5.

> Upgrade JUnit from 4 to 5
> -
>
> Key: HADOOP-14693
> URL: https://issues.apache.org/jira/browse/HADOOP-14693
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Priority: Major
>
> JUnit 4 does not support Java 9. We need to upgrade this.






[jira] [Created] (HADOOP-16447) Upgrade JUnit5 from 5.3.1 to 5.5+ to support global timeout

2019-07-22 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HADOOP-16447:
--

 Summary: Upgrade JUnit5 from 5.3.1 to 5.5+ to support global 
timeout
 Key: HADOOP-16447
 URL: https://issues.apache.org/jira/browse/HADOOP-16447
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: test
Reporter: Akira Ajisaka









[GitHub] [hadoop] pingsutw commented on issue #1089: MAPREDUCE-7076 TestNNBench#testNNBenchCreateReadAndDelete failing in …

2019-07-22 Thread GitBox
pingsutw commented on issue #1089: MAPREDUCE-7076 
TestNNBench#testNNBenchCreateReadAndDelete failing in …
URL: https://github.com/apache/hadoop/pull/1089#issuecomment-514054403
 
 
   @aajisaka my JIRA ID is pingsutw as well, and thanks for your help.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1141: HDDS-1845. OMVolumeSetQuota|OwnerRequest#validateAndUpdateCache return response.

2019-07-22 Thread GitBox
hadoop-yetus commented on issue #1141: HDDS-1845. 
OMVolumeSetQuota|OwnerRequest#validateAndUpdateCache return response.
URL: https://github.com/apache/hadoop/pull/1141#issuecomment-514040476
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:---:|---:|:---|:---|
   | 0 | reexec | 85 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 632 | trunk passed |
   | +1 | compile | 382 | trunk passed |
   | +1 | checkstyle | 76 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 986 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 174 | trunk passed |
   | 0 | spotbugs | 450 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 655 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 584 | the patch passed |
   | +1 | compile | 376 | the patch passed |
   | +1 | javac | 376 | the patch passed |
   | -0 | checkstyle | 50 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 762 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 175 | the patch passed |
   | +1 | findbugs | 689 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 342 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2056 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 60 | The patch does not generate ASF License warnings. |
   | | | 8306 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.om.snapshot.TestOzoneManagerSnapshotProvider |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1141/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1141 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux efb421ab793d 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / c533b79 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1141/1/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1141/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1141/1/testReport/ |
   | Max. process+thread count | 4103 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1141/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] wuyinxian124 opened a new pull request #1142: YARN-9692. ContainerAllocationExpirer is missspelled

2019-07-22 Thread GitBox
wuyinxian124 opened a new pull request #1142: YARN-9692. 
ContainerAllocationExpirer is missspelled
URL: https://github.com/apache/hadoop/pull/1142
 
 
   The old fully qualified name is 
org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.ContainerAllocationExpirer.
   I changed it to 
org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.ContainerAllocationExpired.





[GitHub] [hadoop] guyuqi commented on issue #224: HADOOP-9320 Fix Hadoop native build failure on ARM hard-float

2019-07-22 Thread GitBox
guyuqi commented on issue #224: HADOOP-9320 Fix Hadoop native build failure on 
ARM hard-float
URL: https://github.com/apache/hadoop/pull/224#issuecomment-514033908
 
 
   @amuttsch 
   Could you please elaborate on how to reproduce the native build failure?
   We would like to verify the patch on our Arm64 platform.





[GitHub] [hadoop] hadoop-yetus commented on issue #1140: HDDS-1819. Implement S3 Commit MPU request to use Cache and DoubleBuffer.

2019-07-22 Thread GitBox
hadoop-yetus commented on issue #1140: HDDS-1819. Implement S3 Commit MPU 
request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1140#issuecomment-514033474
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:---:|---:|:---|:---|
   | 0 | reexec | 37 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 620 | trunk passed |
   | +1 | compile | 361 | trunk passed |
   | +1 | checkstyle | 62 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 793 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 153 | trunk passed |
   | 0 | spotbugs | 410 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 602 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 545 | the patch passed |
   | +1 | compile | 364 | the patch passed |
   | +1 | javac | 364 | the patch passed |
   | +1 | checkstyle | 72 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 647 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 156 | the patch passed |
   | +1 | findbugs | 694 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 295 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1679 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 47 | The patch does not generate ASF License warnings. |
   | | | 7292 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.om.snapshot.TestOzoneManagerSnapshotProvider |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.0 Server=19.03.0 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1140/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1140 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 9aed8d5664cf 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / c533b79 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1140/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1140/1/testReport/ |
   | Max. process+thread count | 3883 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1140/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #1139: HDDS-1846. Default value for checksum bytes is different in ozone-site.xml and code.

2019-07-22 Thread GitBox
hadoop-yetus commented on issue #1139: HDDS-1846. Default value for checksum 
bytes is different in ozone-site.xml and code.
URL: https://github.com/apache/hadoop/pull/1139#issuecomment-514030318
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:---:|---:|:---|:---|
   | 0 | reexec | 38 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 616 | trunk passed |
   | +1 | compile | 365 | trunk passed |
   | +1 | checkstyle | 65 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 846 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 153 | trunk passed |
   | 0 | spotbugs | 416 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 614 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 526 | the patch passed |
   | +1 | compile | 348 | the patch passed |
   | +1 | javac | 348 | the patch passed |
   | +1 | checkstyle | 69 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 628 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 149 | the patch passed |
   | +1 | findbugs | 615 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 292 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2149 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 38 | The patch does not generate ASF License warnings. |
   | | | 7665 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.TestStorageContainerManager |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerByPipeline
 |
   |   | hadoop.ozone.om.snapshot.TestOzoneManagerSnapshotProvider |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.0 Server=19.03.0 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1139/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1139 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 6a4ed0a81fce 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / c533b79 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1139/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1139/1/testReport/ |
   | Max. process+thread count | 4346 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common U: hadoop-hdds/common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1139/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-16441) if use -Dbundle.openssl=true, bundled with unnecessary libk5crypto.*

2019-07-22 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16890608#comment-16890608
 ] 

Hadoop QA commented on HADOOP-16441:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  
5s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
18s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 53s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 35m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.0 Server=19.03.0 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HADOOP-16441 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12975447/HADOOP-16441.002.patch
 |
| Optional Tests |  dupname  asflicense  shellcheck  shelldocs  |
| uname | Linux 04a7cbc02e11 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ee87e9a |
| maven | version: Apache Maven 3.3.9 |
| shellcheck | v0.4.6 |
| Max. process+thread count | 308 (vs. ulimit of 5500) |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16407/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



>  if use -Dbundle.openssl=true, bundled with unnecessary libk5crypto.*
> -
>
> Key: HADOOP-16441
> URL: https://issues.apache.org/jira/browse/HADOOP-16441
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.1.2
>Reporter: KWON BYUNGCHANG
>Assignee: KWON BYUNGCHANG
>Priority: Major
> Attachments: HADOOP-16441.001.patch, HADOOP-16441.002.patch
>
>
> If -Dbundle.openssl is used to copy the contents of the openssl.lib directory 
> into the final tar file, it ends up bundled with unnecessary libk5crypto.* 
> files.
> build log
> {noformat}
> + bundle_native_lib true openssl.lib crypto /usr/lib64
> + declare bundleoption=true
> + declare liboption=openssl.lib
> + declare libpattern=crypto
> + declare libdir=/usr/lib64
> + echo 'Checking to bundle with:'
> + echo 'bundleoption=true, liboption=openssl.lib, pattern=crypto 
> libdir=/usr/lib64'
> + [[ true != \t\r\u\e ]]
> + [[ -z /usr/lib64 ]]
> + [[ ! -d /usr/lib64 ]]
> + cd /usr/lib64
> + cd 
> /home/magnum/hadoop-hdp/hadoop-common-project/hadoop-common/target/hadoop-common-3.1.1.3.1.2.3.1.0.0-78/lib/native/
> + tar xfBp -
> + tar cf - ./libcrypto.so ./libcrypto.so.10 ./libcrypto.so.1.0.2k 
> ./libk5crypto.so ./libk5crypto.so.3 ./libk5crypto.so.3.1
> {noformat}
>  
> bundled native library list
> {noformat}
> [magnum@0dabe9f5564d hadoop-hdp]$ ls -al 
> hadoop-dist/target/hadoop-3.1.1.3.1.2.3.1.0.0-78/lib/native/
> total 22704
> drwxrwxr-x 3 magnum magnum4096 Jul 22 04:22 .
> drwxrwxr-x 3 magnum magnum  20 Jul 22 04:30 ..
> drwxrwxr-x 2 magnum magnum  94 Jul 22 04:22 examples
> lrwxrwxrwx 1 magnum magnum  19 Jul  9 03:20 libcrypto.so -> 
> libcrypto.so.1.0.2k
> lrwxrwxrwx 1 magnum magnum  19 Jul  9 03:20 libcrypto.so.10 -> 
> libcrypto.so.1.0.2k
> -rwxr-xr-x 1 magnum magnum 2516624 Mar 12 10:12 libcrypto.so.1.0.2k
> -rw-rw-r-- 1 magnum magnum 1820202 Jul 22 04:13 libhadoop.a
> -rw-rw-r-- 1 magnum magnum 1607168 Jul 22 04:22 libhadooppipes.a
> 

[GitHub] [hadoop] aajisaka commented on issue #1089: MAPREDUCE-7076 TestNNBench#testNNBenchCreateReadAndDelete failing in …

2019-07-22 Thread GitBox
aajisaka commented on issue #1089: MAPREDUCE-7076 
TestNNBench#testNNBenchCreateReadAndDelete failing in …
URL: https://github.com/apache/hadoop/pull/1089#issuecomment-514018551
 
 
   Hi @pingsutw , do you have an ASF JIRA ID? I'd like to assign you to 
https://issues.apache.org/jira/browse/MAPREDUCE-7076





[GitHub] [hadoop] aajisaka commented on issue #1089: MAPREDUCE-7076 TestNNBench#testNNBenchCreateReadAndDelete failing in …

2019-07-22 Thread GitBox
aajisaka commented on issue #1089: MAPREDUCE-7076 
TestNNBench#testNNBenchCreateReadAndDelete failing in …
URL: https://github.com/apache/hadoop/pull/1089#issuecomment-514018213
 
 
   Thanks @pingsutw and @jojochuang !





[GitHub] [hadoop] aajisaka closed pull request #1089: MAPREDUCE-7076 TestNNBench#testNNBenchCreateReadAndDelete failing in …

2019-07-22 Thread GitBox
aajisaka closed pull request #1089: MAPREDUCE-7076 
TestNNBench#testNNBenchCreateReadAndDelete failing in …
URL: https://github.com/apache/hadoop/pull/1089
 
 
   





[GitHub] [hadoop] aajisaka commented on issue #1089: MAPREDUCE-7076 TestNNBench#testNNBenchCreateReadAndDelete failing in …

2019-07-22 Thread GitBox
aajisaka commented on issue #1089: MAPREDUCE-7076 
TestNNBench#testNNBenchCreateReadAndDelete failing in …
URL: https://github.com/apache/hadoop/pull/1089#issuecomment-514015838
 
 
   +1





[jira] [Commented] (HADOOP-16441) if use -Dbundle.openssl=true, bundled with unnecessary libk5crypto.*

2019-07-22 Thread KWON BYUNGCHANG (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16890575#comment-16890575
 ] 

KWON BYUNGCHANG commented on HADOOP-16441:
--

[~iwasakims] Thank you for your feedback.

{{set -x}} is for debugging. I attached a patch that removes the debugging code.

I agree with your opinion.
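The root cause is visible in the build log quoted below: `bundle_native_lib` selects files by the glob pattern `crypto`, which also matches the Kerberos library `libk5crypto.*`. A minimal sketch reproducing the over-broad match (the scratch file names and the anchored `libcrypto*` pattern are illustrative, not the actual patch):

```shell
#!/usr/bin/env sh
# Recreate the library names from the build log in a scratch directory.
dir=$(mktemp -d)
cd "$dir" || exit 1
touch libcrypto.so libcrypto.so.1.0.2k libk5crypto.so libk5crypto.so.3.1

# The unanchored pattern "crypto" matches both libraries (4 files above).
broad=$(ls ./*crypto* | wc -l)

# Anchoring the pattern to the library name excludes libk5crypto.* (2 files).
narrow=$(ls ./libcrypto* | wc -l)

echo "broad=$broad narrow=$narrow"
```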

>  if use -Dbundle.openssl=true, bundled with unnecessary libk5crypto.*
> -
>
> Key: HADOOP-16441
> URL: https://issues.apache.org/jira/browse/HADOOP-16441
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.1.2
>Reporter: KWON BYUNGCHANG
>Assignee: KWON BYUNGCHANG
>Priority: Major
> Attachments: HADOOP-16441.001.patch, HADOOP-16441.002.patch
>
>
> If -Dbundle.openssl is used to copy the contents of the openssl.lib directory 
> into the final tar file, it ends up bundled with unnecessary libk5crypto.* 
> files.
> build log
> {noformat}
> + bundle_native_lib true openssl.lib crypto /usr/lib64
> + declare bundleoption=true
> + declare liboption=openssl.lib
> + declare libpattern=crypto
> + declare libdir=/usr/lib64
> + echo 'Checking to bundle with:'
> + echo 'bundleoption=true, liboption=openssl.lib, pattern=crypto 
> libdir=/usr/lib64'
> + [[ true != \t\r\u\e ]]
> + [[ -z /usr/lib64 ]]
> + [[ ! -d /usr/lib64 ]]
> + cd /usr/lib64
> + cd 
> /home/magnum/hadoop-hdp/hadoop-common-project/hadoop-common/target/hadoop-common-3.1.1.3.1.2.3.1.0.0-78/lib/native/
> + tar xfBp -
> + tar cf - ./libcrypto.so ./libcrypto.so.10 ./libcrypto.so.1.0.2k 
> ./libk5crypto.so ./libk5crypto.so.3 ./libk5crypto.so.3.1
> {noformat}
>  
> bundled native library list
> {noformat}
> [magnum@0dabe9f5564d hadoop-hdp]$ ls -al 
> hadoop-dist/target/hadoop-3.1.1.3.1.2.3.1.0.0-78/lib/native/
> total 22704
> drwxrwxr-x 3 magnum magnum4096 Jul 22 04:22 .
> drwxrwxr-x 3 magnum magnum  20 Jul 22 04:30 ..
> drwxrwxr-x 2 magnum magnum  94 Jul 22 04:22 examples
> lrwxrwxrwx 1 magnum magnum  19 Jul  9 03:20 libcrypto.so -> 
> libcrypto.so.1.0.2k
> lrwxrwxrwx 1 magnum magnum  19 Jul  9 03:20 libcrypto.so.10 -> 
> libcrypto.so.1.0.2k
> -rwxr-xr-x 1 magnum magnum 2516624 Mar 12 10:12 libcrypto.so.1.0.2k
> -rw-rw-r-- 1 magnum magnum 1820202 Jul 22 04:13 libhadoop.a
> -rw-rw-r-- 1 magnum magnum 1607168 Jul 22 04:22 libhadooppipes.a
> lrwxrwxrwx 1 magnum magnum  18 Jul 22 04:13 libhadoop.so -> 
> libhadoop.so.1.0.0
> -rwxrwxr-x 1 magnum magnum 1026006 Jul 22 04:13 libhadoop.so.1.0.0
> -rw-rw-r-- 1 magnum magnum  475720 Jul 22 04:22 libhadooputils.a
> -rw-rw-r-- 1 magnum magnum  458600 Jul 22 04:16 libhdfs.a
> lrwxrwxrwx 1 magnum magnum  16 Jul 22 04:16 libhdfs.so -> libhdfs.so.0.0.0
> -rwxrwxr-x 1 magnum magnum  286052 Jul 22 04:16 libhdfs.so.0.0.0
> -rw-r--r-- 1 magnum magnum 1393974 Jul  9 04:47 libisal.a
> -rwxr-xr-x 1 magnum magnum 915 Jul  9 04:47 libisal.la
> lrwxrwxrwx 1 magnum magnum  17 Jul  9 04:47 libisal.so -> 
> libisal.so.2.0.27
> lrwxrwxrwx 1 magnum magnum  17 Jul  9 04:47 libisal.so.2 -> 
> libisal.so.2.0.27
> -rwxr-xr-x 1 magnum magnum  767778 Jul  9 04:47 libisal.so.2.0.27
> ==
> lrwxrwxrwx 1 magnum magnum  18 Aug  3  2018 libk5crypto.so -> 
> libk5crypto.so.3.1
> lrwxrwxrwx 1 magnum magnum  18 Aug  3  2018 libk5crypto.so.3 -> 
> libk5crypto.so.3.1
> -rwxr-xr-x 1 magnum magnum  210840 May  9  2018 libk5crypto.so.3.1
> ==
> -rw-rw-r-- 1 magnum magnum 8584562 Jul 22 04:21 libnativetask.a
> lrwxrwxrwx 1 magnum magnum  22 Jul 22 04:21 libnativetask.so -> 
> libnativetask.so.1.0.0
> -rwxrwxr-x 1 magnum magnum 3393065 Jul 22 04:21 libnativetask.so.1.0.0
> lrwxrwxrwx 1 magnum magnum  18 Jul  9 04:45 libsnappy.so -> 
> libsnappy.so.1.1.4
> lrwxrwxrwx 1 magnum magnum  18 Jul  9 04:45 libsnappy.so.1 -> 
> libsnappy.so.1.1.4
> -rwxr-xr-x 1 magnum magnum   23800 Jun 10  2014 libsnappy.so.1.1.4
> lrwxrwxrwx 1 magnum magnum  16 Jul  9 04:45 libzstd.so -> libzstd.so.1.4.0
> lrwxrwxrwx 1 magnum magnum  16 Jul  9 04:45 libzstd.so.1 -> 
> libzstd.so.1.4.0
> -rwxr-xr-x 1 magnum magnum  649784 Apr 29 16:58 libzstd.so.1.4.0
> {noformat}






[jira] [Updated] (HADOOP-16441) if use -Dbundle.openssl=true, bundled with unnecessary libk5crypto.*

2019-07-22 Thread KWON BYUNGCHANG (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KWON BYUNGCHANG updated HADOOP-16441:
-
Attachment: HADOOP-16441.002.patch

>  if use -Dbundle.openssl=true, bundled with unnecessary libk5crypto.*
> -
>
> Key: HADOOP-16441
> URL: https://issues.apache.org/jira/browse/HADOOP-16441
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.1.2
>Reporter: KWON BYUNGCHANG
>Assignee: KWON BYUNGCHANG
>Priority: Major
> Attachments: HADOOP-16441.001.patch, HADOOP-16441.002.patch
>
>
> If -Dbundle.openssl=true is used to copy the contents of the openssl.lib 
> directory into the final tar file, the build ends up bundling unnecessary 
> libk5crypto.* files.
> Build log:
> {noformat}
> + bundle_native_lib true openssl.lib crypto /usr/lib64
> + declare bundleoption=true
> + declare liboption=openssl.lib
> + declare libpattern=crypto
> + declare libdir=/usr/lib64
> + echo 'Checking to bundle with:'
> + echo 'bundleoption=true, liboption=openssl.lib, pattern=crypto 
> libdir=/usr/lib64'
> + [[ true != \t\r\u\e ]]
> + [[ -z /usr/lib64 ]]
> + [[ ! -d /usr/lib64 ]]
> + cd /usr/lib64
> + cd 
> /home/magnum/hadoop-hdp/hadoop-common-project/hadoop-common/target/hadoop-common-3.1.1.3.1.2.3.1.0.0-78/lib/native/
> + tar xfBp -
> + tar cf - ./libcrypto.so ./libcrypto.so.10 ./libcrypto.so.1.0.2k 
> ./libk5crypto.so ./libk5crypto.so.3 ./libk5crypto.so.3.1
> {noformat}
>  
> bundled native library list
> {noformat}
> [magnum@0dabe9f5564d hadoop-hdp]$ ls -al 
> hadoop-dist/target/hadoop-3.1.1.3.1.2.3.1.0.0-78/lib/native/
> total 22704
> drwxrwxr-x 3 magnum magnum4096 Jul 22 04:22 .
> drwxrwxr-x 3 magnum magnum  20 Jul 22 04:30 ..
> drwxrwxr-x 2 magnum magnum  94 Jul 22 04:22 examples
> lrwxrwxrwx 1 magnum magnum  19 Jul  9 03:20 libcrypto.so -> 
> libcrypto.so.1.0.2k
> lrwxrwxrwx 1 magnum magnum  19 Jul  9 03:20 libcrypto.so.10 -> 
> libcrypto.so.1.0.2k
> -rwxr-xr-x 1 magnum magnum 2516624 Mar 12 10:12 libcrypto.so.1.0.2k
> -rw-rw-r-- 1 magnum magnum 1820202 Jul 22 04:13 libhadoop.a
> -rw-rw-r-- 1 magnum magnum 1607168 Jul 22 04:22 libhadooppipes.a
> lrwxrwxrwx 1 magnum magnum  18 Jul 22 04:13 libhadoop.so -> 
> libhadoop.so.1.0.0
> -rwxrwxr-x 1 magnum magnum 1026006 Jul 22 04:13 libhadoop.so.1.0.0
> -rw-rw-r-- 1 magnum magnum  475720 Jul 22 04:22 libhadooputils.a
> -rw-rw-r-- 1 magnum magnum  458600 Jul 22 04:16 libhdfs.a
> lrwxrwxrwx 1 magnum magnum  16 Jul 22 04:16 libhdfs.so -> libhdfs.so.0.0.0
> -rwxrwxr-x 1 magnum magnum  286052 Jul 22 04:16 libhdfs.so.0.0.0
> -rw-r--r-- 1 magnum magnum 1393974 Jul  9 04:47 libisal.a
> -rwxr-xr-x 1 magnum magnum 915 Jul  9 04:47 libisal.la
> lrwxrwxrwx 1 magnum magnum  17 Jul  9 04:47 libisal.so -> 
> libisal.so.2.0.27
> lrwxrwxrwx 1 magnum magnum  17 Jul  9 04:47 libisal.so.2 -> 
> libisal.so.2.0.27
> -rwxr-xr-x 1 magnum magnum  767778 Jul  9 04:47 libisal.so.2.0.27
> ==
> lrwxrwxrwx 1 magnum magnum  18 Aug  3  2018 libk5crypto.so -> 
> libk5crypto.so.3.1
> lrwxrwxrwx 1 magnum magnum  18 Aug  3  2018 libk5crypto.so.3 -> 
> libk5crypto.so.3.1
> -rwxr-xr-x 1 magnum magnum  210840 May  9  2018 libk5crypto.so.3.1
> ==
> -rw-rw-r-- 1 magnum magnum 8584562 Jul 22 04:21 libnativetask.a
> lrwxrwxrwx 1 magnum magnum  22 Jul 22 04:21 libnativetask.so -> 
> libnativetask.so.1.0.0
> -rwxrwxr-x 1 magnum magnum 3393065 Jul 22 04:21 libnativetask.so.1.0.0
> lrwxrwxrwx 1 magnum magnum  18 Jul  9 04:45 libsnappy.so -> 
> libsnappy.so.1.1.4
> lrwxrwxrwx 1 magnum magnum  18 Jul  9 04:45 libsnappy.so.1 -> 
> libsnappy.so.1.1.4
> -rwxr-xr-x 1 magnum magnum   23800 Jun 10  2014 libsnappy.so.1.1.4
> lrwxrwxrwx 1 magnum magnum  16 Jul  9 04:45 libzstd.so -> libzstd.so.1.4.0
> lrwxrwxrwx 1 magnum magnum  16 Jul  9 04:45 libzstd.so.1 -> 
> libzstd.so.1.4.0
> -rwxr-xr-x 1 magnum magnum  649784 Apr 29 16:58 libzstd.so.1.4.0
> {noformat}






[GitHub] [hadoop] bharatviswa504 opened a new pull request #1141: HDDS-1845. OMVolumeSetQuota|OwnerRequest#validateAndUpdateCache return response.

2019-07-22 Thread GitBox
bharatviswa504 opened a new pull request #1141: HDDS-1845. 
OMVolumeSetQuota|OwnerRequest#validateAndUpdateCache return response.
URL: https://github.com/apache/hadoop/pull/1141
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 opened a new pull request #1140: HDDS-1819. Implement S3 Commit MPU request to use Cache and DoubleBuffer.

2019-07-22 Thread GitBox
bharatviswa504 opened a new pull request #1140: HDDS-1819. Implement S3 Commit 
MPU request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1140
 
 
   





[jira] [Commented] (HADOOP-16398) Exports Hadoop metrics to Prometheus

2019-07-22 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16890569#comment-16890569
 ] 

Akira Ajisaka commented on HADOOP-16398:


Hi [~anu] and [~adam.antal], would you check the latest patch?

> Exports Hadoop metrics to Prometheus
> 
>
> Key: HADOOP-16398
> URL: https://issues.apache.org/jira/browse/HADOOP-16398
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: metrics
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-16398.001.patch, HADOOP-16398.002.patch
>
>
> Hadoop common side of HDDS-846. HDDS already has its own 
> PrometheusMetricsSink, so we can reuse the implementation.






[GitHub] [hadoop] bharatviswa504 opened a new pull request #1139: HDDS-1846. Default value for checksum bytes is different in ozone-site.xml and code.

2019-07-22 Thread GitBox
bharatviswa504 opened a new pull request #1139: HDDS-1846. Default value for 
checksum bytes is different in ozone-site.xml and code.
URL: https://github.com/apache/hadoop/pull/1139
 
 
   





[jira] [Commented] (HADOOP-16444) Updating incompatible issue

2019-07-22 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16890557#comment-16890557
 ] 

Wei-Chiu Chuang commented on HADOOP-16444:
--

Take a look at the HBase support matrix: 
http://hbase.apache.org/book.html#basic.prerequisites
If you use HBase 0.98.6 on Hadoop 3.2.0, I think this is well beyond what both 
communities are expected to support.

> Updating incompatible issue
> ---
>
> Key: HADOOP-16444
> URL: https://issues.apache.org/jira/browse/HADOOP-16444
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net
>Affects Versions: 3.1.2
>Reporter: xia0c
>Priority: Major
>  Labels: performance
>
> Hi,
> When I try to update hadoop-common to the latest version, 3.2.0, I get an 
> incompatibility with HBase. It works on version 2.5.0-cdh5.3.10.
> {code:java}
> public class HadoopTest {
>   @Test
>   public void Demo() throws Exception {
>     HBaseTestingUtility htu1 = new HBaseTestingUtility();
>     htu1.startMiniCluster();
>   }
> }
> {code}
> Thanks a lot






[jira] [Commented] (HADOOP-16245) Enabling SSL within LdapGroupsMapping can break system SSL configs

2019-07-22 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16890555#comment-16890555
 ] 

Wei-Chiu Chuang commented on HADOOP-16245:
--

After this patch, a user won't be able to configure the LDAP keystore/truststore 
using the Java command line options javax.net.ssl.keyStore / 
javax.net.ssl.trustStore. But this is neither a recommended way (it is not 
secure) nor ever explicitly supported. I think we are good.

> Enabling SSL within LdapGroupsMapping can break system SSL configs
> --
>
> Key: HADOOP-16245
> URL: https://issues.apache.org/jira/browse/HADOOP-16245
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, security
>Affects Versions: 2.9.1, 2.8.4, 2.7.6, 3.1.1, 3.0.3
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HADOOP-16245.000.patch, HADOOP-16245.001.patch, 
> HADOOP-16245.002.patch
>
>
> When debugging an issue where one of our server components was unable to 
> communicate with other components via SSL, we realized that LdapGroupsMapping 
> sets its SSL configurations globally, rather than scoping them to the HTTP 
> clients it creates.
> {code:title=LdapGroupsMapping}
>   DirContext getDirContext() throws NamingException {
> if (ctx == null) {
>   // Set up the initial environment for LDAP connectivity
>   Hashtable env = new Hashtable();
>   env.put(Context.INITIAL_CONTEXT_FACTORY,
>   com.sun.jndi.ldap.LdapCtxFactory.class.getName());
>   env.put(Context.PROVIDER_URL, ldapUrl);
>   env.put(Context.SECURITY_AUTHENTICATION, "simple");
>   // Set up SSL security, if necessary
>   if (useSsl) {
> env.put(Context.SECURITY_PROTOCOL, "ssl");
> if (!keystore.isEmpty()) {
>   System.setProperty("javax.net.ssl.keyStore", keystore);
> }
> if (!keystorePass.isEmpty()) {
>   System.setProperty("javax.net.ssl.keyStorePassword", keystorePass);
> }
> if (!truststore.isEmpty()) {
>   System.setProperty("javax.net.ssl.trustStore", truststore);
> }
> if (!truststorePass.isEmpty()) {
>   System.setProperty("javax.net.ssl.trustStorePassword",
>   truststorePass);
> }
>   }
>   env.put(Context.SECURITY_PRINCIPAL, bindUser);
>   env.put(Context.SECURITY_CREDENTIALS, bindPassword);
>   env.put("com.sun.jndi.ldap.connect.timeout", 
> conf.get(CONNECTION_TIMEOUT,
>   String.valueOf(CONNECTION_TIMEOUT_DEFAULT)));
>   env.put("com.sun.jndi.ldap.read.timeout", conf.get(READ_TIMEOUT,
>   String.valueOf(READ_TIMEOUT_DEFAULT)));
>   ctx = new InitialDirContext(env);
> }
> {code}
> Notice the {{System.setProperty()}} calls, which will change settings 
> JVM-wide. This causes issues for other SSL clients, which may rely on the 
> default JVM truststore being used. This behavior was initially introduced by 
> HADOOP-8121, and extended to include the truststore configurations in 
> HADOOP-12862.
> The correct approach is to use a mechanism which is scoped to the LDAP 
> requests only. The right approach appears to be to use the 
> {{java.naming.ldap.factory.socket}} parameter to set the socket factory to a 
> custom SSL socket factory which correctly sets the key and trust store 
> parameters. See an example [here|https://stackoverflow.com/a/4615497/4979203].
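To make the scoping concrete, here is a minimal sketch of the env-based alternative described above: the SSL settings are attached to the JNDI environment via `java.naming.ldap.factory.socket` instead of JVM-wide system properties. The class name `org.example.KeyStoreAwareSocketFactory` is a hypothetical placeholder for a custom SSLSocketFactory subclass, not a class that exists in Hadoop.

```java
import java.util.Hashtable;
import javax.naming.Context;

// Sketch only: builds the JNDI environment without opening a connection.
// "org.example.KeyStoreAwareSocketFactory" is a hypothetical custom
// SSLSocketFactory subclass that would load the configured key/trust stores.
public class ScopedLdapEnv {
    static Hashtable<String, String> buildEnv(String ldapUrl) {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, ldapUrl);
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PROTOCOL, "ssl");
        // Scoped to this context only: no System.setProperty() side effects.
        env.put("java.naming.ldap.factory.socket",
                "org.example.KeyStoreAwareSocketFactory");
        return env;
    }

    public static void main(String[] args) {
        Hashtable<String, String> env = buildEnv("ldaps://ldap.example.com:636");
        System.out.println(env.get("java.naming.ldap.factory.socket"));
    }
}
```

Because the factory class is named in the per-context environment, other SSL clients in the same JVM keep the default trust store.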






[jira] [Comment Edited] (HADOOP-16245) Enabling SSL within LdapGroupsMapping can break system SSL configs

2019-07-22 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16890555#comment-16890555
 ] 

Wei-Chiu Chuang edited comment on HADOOP-16245 at 7/22/19 11:57 PM:


After this patch, a user won't be able to configure the LDAP keystore/truststore 
using the Java command line options javax.net.ssl.keyStore / 
javax.net.ssl.trustStore. But this is neither a recommended way (it is not 
secure) nor ever explicitly supported. I think we are good.


was (Author: jojochuang):
After this patch, a user won't be able to configure LDAP keystore/truststore 
using Java command line options javax.net.ssl.keyStore / 
javax.net.ssl.trustStore. But this is nor a recommend way (not secure) nor is 
ever explicitly supported. I think we are good.

> Enabling SSL within LdapGroupsMapping can break system SSL configs
> --
>
> Key: HADOOP-16245
> URL: https://issues.apache.org/jira/browse/HADOOP-16245
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, security
>Affects Versions: 2.9.1, 2.8.4, 2.7.6, 3.1.1, 3.0.3
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HADOOP-16245.000.patch, HADOOP-16245.001.patch, 
> HADOOP-16245.002.patch
>
>
> When debugging an issue where one of our server components was unable to 
> communicate with other components via SSL, we realized that LdapGroupsMapping 
> sets its SSL configurations globally, rather than scoping them to the HTTP 
> clients it creates.
> {code:title=LdapGroupsMapping}
>   DirContext getDirContext() throws NamingException {
> if (ctx == null) {
>   // Set up the initial environment for LDAP connectivity
>   Hashtable env = new Hashtable();
>   env.put(Context.INITIAL_CONTEXT_FACTORY,
>   com.sun.jndi.ldap.LdapCtxFactory.class.getName());
>   env.put(Context.PROVIDER_URL, ldapUrl);
>   env.put(Context.SECURITY_AUTHENTICATION, "simple");
>   // Set up SSL security, if necessary
>   if (useSsl) {
> env.put(Context.SECURITY_PROTOCOL, "ssl");
> if (!keystore.isEmpty()) {
>   System.setProperty("javax.net.ssl.keyStore", keystore);
> }
> if (!keystorePass.isEmpty()) {
>   System.setProperty("javax.net.ssl.keyStorePassword", keystorePass);
> }
> if (!truststore.isEmpty()) {
>   System.setProperty("javax.net.ssl.trustStore", truststore);
> }
> if (!truststorePass.isEmpty()) {
>   System.setProperty("javax.net.ssl.trustStorePassword",
>   truststorePass);
> }
>   }
>   env.put(Context.SECURITY_PRINCIPAL, bindUser);
>   env.put(Context.SECURITY_CREDENTIALS, bindPassword);
>   env.put("com.sun.jndi.ldap.connect.timeout", 
> conf.get(CONNECTION_TIMEOUT,
>   String.valueOf(CONNECTION_TIMEOUT_DEFAULT)));
>   env.put("com.sun.jndi.ldap.read.timeout", conf.get(READ_TIMEOUT,
>   String.valueOf(READ_TIMEOUT_DEFAULT)));
>   ctx = new InitialDirContext(env);
> }
> {code}
> Notice the {{System.setProperty()}} calls, which will change settings 
> JVM-wide. This causes issues for other SSL clients, which may rely on the 
> default JVM truststore being used. This behavior was initially introduced by 
> HADOOP-8121, and extended to include the truststore configurations in 
> HADOOP-12862.
> The correct approach is to use a mechanism which is scoped to the LDAP 
> requests only. The right approach appears to be to use the 
> {{java.naming.ldap.factory.socket}} parameter to set the socket factory to a 
> custom SSL socket factory which correctly sets the key and trust store 
> parameters. See an example [here|https://stackoverflow.com/a/4615497/4979203].






[GitHub] [hadoop] anuengineer closed pull request #994: HDDS-1710. Publish JVM metrics via Hadoop metrics

2019-07-22 Thread GitBox
anuengineer closed pull request #994: HDDS-1710. Publish JVM metrics via Hadoop 
metrics
URL: https://github.com/apache/hadoop/pull/994
 
 
   





[GitHub] [hadoop] anuengineer commented on issue #994: HDDS-1710. Publish JVM metrics via Hadoop metrics

2019-07-22 Thread GitBox
anuengineer commented on issue #994: HDDS-1710. Publish JVM metrics via Hadoop 
metrics
URL: https://github.com/apache/hadoop/pull/994#issuecomment-513996724
 
 
   Thank you for your contribution. I have committed this patch to trunk and 
ozone-0.4.1





[GitHub] [hadoop] anuengineer commented on issue #1102: HDDS-1803. shellcheck.sh does not work on Mac

2019-07-22 Thread GitBox
anuengineer commented on issue #1102: HDDS-1803. shellcheck.sh does not work on 
Mac
URL: https://github.com/apache/hadoop/pull/1102#issuecomment-513991563
 
 
   Thank you for the patch. I have committed this to trunk and ozone-0.4.1





[GitHub] [hadoop] anuengineer closed pull request #1102: HDDS-1803. shellcheck.sh does not work on Mac

2019-07-22 Thread GitBox
anuengineer closed pull request #1102: HDDS-1803. shellcheck.sh does not work 
on Mac
URL: https://github.com/apache/hadoop/pull/1102
 
 
   





[GitHub] [hadoop] anuengineer closed pull request #1105: HDDS-1799. Add goofyfs to the ozone-runner docker image

2019-07-22 Thread GitBox
anuengineer closed pull request #1105: HDDS-1799. Add goofyfs to the 
ozone-runner docker image
URL: https://github.com/apache/hadoop/pull/1105
 
 
   





[GitHub] [hadoop] anuengineer commented on issue #1105: HDDS-1799. Add goofyfs to the ozone-runner docker image

2019-07-22 Thread GitBox
anuengineer commented on issue #1105: HDDS-1799. Add goofyfs to the 
ozone-runner docker image
URL: https://github.com/apache/hadoop/pull/1105#issuecomment-513989458
 
 
   I have committed this patch to trunk and ozone-0.4.1. Thanks for your 
contribution.





[jira] [Commented] (HADOOP-16245) Enabling SSL within LdapGroupsMapping can break system SSL configs

2019-07-22 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16890531#comment-16890531
 ] 

Hadoop QA commented on HADOOP-16245:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 37s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 20m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  6s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
10s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
59s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}125m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HADOOP-16245 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12975438/HADOOP-16245.002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d53716be4e95 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / cdc36fe |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16405/testReport/ |
| Max. process+thread count | 1747 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16405/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Enabling SSL within 

[jira] [Commented] (HADOOP-16444) Updating incompatible issue

2019-07-22 Thread xia0c (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16890530#comment-16890530
 ] 

xia0c commented on HADOOP-16444:


Sure, thanks for your reply. The error message is:

{noformat}
Demo(org.apache.hadoop.hbase.client.HadoopTest)  Time elapsed: 1.391 sec  <<< 
ERROR!
java.lang.NoSuchFieldError: LOG
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.initWebHdfs(NameNodeHttpServer.java:70)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:137)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:695)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:585)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:754)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:738)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1427)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1001)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:872)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:707)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:645)
at 
org.apache.hadoop.hbase.HBaseTestingUtility.startMiniDFSCluster(HBaseTestingUtility.java:525)
at 
org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:854)
at 
org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:779)
at 
org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:750)
at 
org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:719)
at org.apache.hadoop.hbase.client.HadoopTest.Demo(HadoopTest.java:10)

{noformat}

I am using HBase 0.98.6-cdh5.3.0. 

Thanks



> Updating incompatible issue
> ---
>
> Key: HADOOP-16444
> URL: https://issues.apache.org/jira/browse/HADOOP-16444
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net
>Affects Versions: 3.1.2
>Reporter: xia0c
>Priority: Major
>  Labels: performance
>
> Hi,
> When I try to update hadoop-common to the latest version, 3.2.0, I get an 
> incompatibility with HBase. It works on version 2.5.0-cdh5.3.10.
> {code:java}
> public class HadoopTest {
>   @Test
>   public void Demo() throws Exception {
>     HBaseTestingUtility htu1 = new HBaseTestingUtility();
>     htu1.startMiniCluster();
>   }
> }
> {code}
> Thanks a lot






[jira] [Commented] (HADOOP-16446) Rolling upgrade to Hadoop 3.2.0 breaks due to backward in-compatible change

2019-07-22 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16890529#comment-16890529
 ] 

Eric Yang commented on HADOOP-16446:


[~lingchao] The root cause is the same as HADOOP-16444.

> Rolling upgrade to Hadoop 3.2.0 breaks due to backward in-compatible change
> ---
>
> Key: HADOOP-16446
> URL: https://issues.apache.org/jira/browse/HADOOP-16446
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.2.0
>Reporter: xia0c
>Priority: Major
>
> Hi,
> When I try to update hadoop-common to the latest version, 3.2.0, it breaks 
> backward compatibility due to a compile dependency change in commons-lang. 
> This also breaks rolling upgrades for any client that builds against it, such 
> as Apache Crunch.
> The following code will fail to run with the error 
> "java.lang.NoClassDefFoundError: 
> org/apache/commons/lang/SerializationException":
>   
> {code:java}
> public void Demo(){
> PCollection<String> data = MemPipeline.typedCollectionOf(strings(), "a"); 
> }
> {code}
> Thanks a lot.
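If the root cause is the removal of the commons-lang 2.x transitive dependency from hadoop-common 3.2.0 (the "compile dependency change in commons.lang" mentioned above), one workaround is for the client application to declare that dependency directly instead of relying on Hadoop to provide it. A hedged sketch; the version shown is illustrative:

```xml
<!-- Declare commons-lang 2.x directly in the client application's pom.xml,
     since hadoop-common 3.2.0 no longer pulls it in transitively.
     The version below is illustrative. -->
<dependency>
  <groupId>commons-lang</groupId>
  <artifactId>commons-lang</artifactId>
  <version>2.6</version>
</dependency>
```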






[jira] [Commented] (HADOOP-16444) Updating incompatible issue

2019-07-22 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16890527#comment-16890527
 ] 

Eric Yang commented on HADOOP-16444:


[~lingchao] Application developers should not depend on Hadoop's transitive 
dependencies to obtain HBase jar files. This can be fragile, because Hadoop 
needs to evolve at its own pace and may use a new version of HBase for its own 
internal services, such as the YARN Timeline Service. The recommended approach 
is to define dependency exclusions in your application's pom.xml to prevent 
sourcing Hadoop's version of HBase, and then declare the HBase dependency 
separately. This ensures your application gets the right version of the HBase 
jar file and is isolated from Hadoop internals.

{code}
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <version>3.2.0</version>
  <exclusions>
    <exclusion>
      <groupId>org.apache.hbase</groupId>
      <artifactId>hbase-common</artifactId>
    </exclusion>
  </exclusions>
</dependency>
{code}

> Updating incompatible issue
> ---
>
> Key: HADOOP-16444
> URL: https://issues.apache.org/jira/browse/HADOOP-16444
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net
>Affects Versions: 3.1.2
>Reporter: xia0c
>Priority: Major
>  Labels: performance
>
> Hi,
> When I try to update hadoop-common to the latest version, 3.2.0, I get an 
> incompatibility with HBase. It works on version 2.5.0-cdh5.3.10.
> {code:java}
> public class HadoopTest {
>   @Test
>   public void Demo() throws Exception {
>     HBaseTestingUtility htu1 = new HBaseTestingUtility();
>     htu1.startMiniCluster();
>   }
> }
> {code}
> Thanks a lot






[jira] [Commented] (HADOOP-16445) Allow separate custom signing algorithms for S3 and DDB

2019-07-22 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16890520#comment-16890520
 ] 

Hadoop QA commented on HADOOP-16445:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 20s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 38 
new + 33 unchanged - 1 fixed = 71 total (was 34) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
33s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 15s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.0 Server=19.03.0 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HADOOP-16445 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12975440/HADOOP-16445.01.patch 
|
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 394188791d26 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c958edd |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16406/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16406/testReport/ |
| Max. process+thread count | 447 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16406/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was 

[GitHub] [hadoop] anuengineer closed pull request #1064: HDDS-1585. Add LICENSE.txt and NOTICE.txt to Ozone Recon Web

2019-07-22 Thread GitBox
anuengineer closed pull request #1064: HDDS-1585. Add LICENSE.txt and 
NOTICE.txt to Ozone Recon Web
URL: https://github.com/apache/hadoop/pull/1064
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] arp7 merged pull request #966: HDDS-1686. Remove check to get from openKeyTable in acl implementatio…

2019-07-22 Thread GitBox
arp7 merged pull request #966: HDDS-1686. Remove check to get from openKeyTable 
in acl implementatio…
URL: https://github.com/apache/hadoop/pull/966
 
 
   





[GitHub] [hadoop] xiaoyuyao commented on issue #966: HDDS-1686. Remove check to get from openKeyTable in acl implementatio…

2019-07-22 Thread GitBox
xiaoyuyao commented on issue #966: HDDS-1686. Remove check to get from 
openKeyTable in acl implementatio…
URL: https://github.com/apache/hadoop/pull/966#issuecomment-513973879
 
 
   +1 from me too. Thanks @bharatviswa504 and @arp7 





[jira] [Created] (HADOOP-16446) Rolling upgrade to Hadoop 3.2.0 breaks due to backward in-compatible change

2019-07-22 Thread xia0c (JIRA)
xia0c created HADOOP-16446:
--

 Summary: Rolling upgrade to Hadoop 3.2.0 breaks due to backward 
in-compatible change
 Key: HADOOP-16446
 URL: https://issues.apache.org/jira/browse/HADOOP-16446
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.2.0
Reporter: xia0c


Hi,

When I try to update hadoop-common to the latest version, 3.2.0, it breaks 
backward compatibility due to a compile dependency change in commons-lang. This 
also breaks rolling upgrades, since it affects any client that depends on it, 
such as Apache Crunch.

The following code fails at runtime with the error 
"java.lang.NoClassDefFoundError: 
org/apache/commons/lang/SerializationException":
  
{code:java}
public void demo() {
  PCollection<String> data = MemPipeline.typedCollectionOf(strings(), "a");
}
{code}
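One workaround, assuming the missing class is from commons-lang 2.x (the 
artifact that hadoop-common no longer exposes transitively), is for the client 
to declare that dependency itself:

{code:xml}
<!-- workaround sketch: restore the commons-lang 2.x classes that
     hadoop-common 3.2.0 no longer provides transitively -->
<dependency>
  <groupId>commons-lang</groupId>
  <artifactId>commons-lang</artifactId>
  <version>2.6</version>
</dependency>
{code}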

Thanks a lot.






[jira] [Commented] (HADOOP-16444) Updating incompatible issue

2019-07-22 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16890497#comment-16890497
 ] 

Wei-Chiu Chuang commented on HADOOP-16444:
--

Hi, thanks for reporting the issue.
Would you mind sharing the error message? Does it fail to compile, or does it 
throw runtime exceptions? Which HBase version are you using?
The last time I tried, HBase (master branch) ran with Hadoop 3.2.0.

Also note: 2.5.0-cdh5.3.10 is a CDH version string. There is a huge gap between 
that version and Apache Hadoop 3.2.0.

Thanks

> Updating incompatible issue
> ---
>
> Key: HADOOP-16444
> URL: https://issues.apache.org/jira/browse/HADOOP-16444
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net
>Affects Versions: 3.1.2
>Reporter: xia0c
>Priority: Major
>  Labels: performance
>
> Hi,
> When I try to update hadoop-common to the latest version, 3.2.0, I get an
> incompatibility issue with HBase. It works on version 2.5.0-cdh5.3.10.
> {code:java}
> public void test() throws Exception {
>   HBaseTestingUtility htu1 = new HBaseTestingUtility();
>   htu1.startMiniCluster();
> }
> {code}
> Thanks a lot






[jira] [Updated] (HADOOP-16445) Allow separate custom signing algorithms for S3 and DDB

2019-07-22 Thread Siddharth Seth (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated HADOOP-16445:

Status: Patch Available  (was: Open)

The patch allows separate signing algorithms to be used for S3 and DDB 
(usage documentation is included in the patch).

Also, a non-standard signer cannot be used without first registering it with 
the Amazon SDK; the patch allows such non-standard signers to be registered.

 

[~ste...@apache.org], [~mackrorysd] - could you please take a look when you get 
a chance.

> Allow separate custom signing algorithms for S3 and DDB
> ---
>
> Key: HADOOP-16445
> URL: https://issues.apache.org/jira/browse/HADOOP-16445
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
>Priority: Major
> Attachments: HADOOP-16445.01.patch
>
>
> fs.s3a.signing-algorithm allows overriding the signer. This applies to both 
> the S3 and DDB clients. Need to be able to specify separate signing algorithm 
> overrides for S3 and DDB.
>  






[jira] [Updated] (HADOOP-16445) Allow separate custom signing algorithms for S3 and DDB

2019-07-22 Thread Siddharth Seth (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated HADOOP-16445:

Attachment: HADOOP-16445.01.patch

> Allow separate custom signing algorithms for S3 and DDB
> ---
>
> Key: HADOOP-16445
> URL: https://issues.apache.org/jira/browse/HADOOP-16445
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
>Priority: Major
> Attachments: HADOOP-16445.01.patch
>
>
> fs.s3a.signing-algorithm allows overriding the signer. This applies to both 
> the S3 and DDB clients. Need to be able to specify separate signing algorithm 
> overrides for S3 and DDB.
>  






[jira] [Created] (HADOOP-16445) Allow separate custom signing algorithms for S3 and DDB

2019-07-22 Thread Siddharth Seth (JIRA)
Siddharth Seth created HADOOP-16445:
---

 Summary: Allow separate custom signing algorithms for S3 and DDB
 Key: HADOOP-16445
 URL: https://issues.apache.org/jira/browse/HADOOP-16445
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Reporter: Siddharth Seth
Assignee: Siddharth Seth


fs.s3a.signing-algorithm allows overriding the signer. This applies to both the 
S3 and DDB clients. Need to be able to specify separate signing algorithm 
overrides for S3 and DDB.
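
For context, today's single override is set like this in core-site.xml; 
{{S3SignerType}} is one of the signer names the AWS SDK already recognizes:

{code:xml}
<property>
  <name>fs.s3a.signing-algorithm</name>
  <!-- one of the signer names registered with the AWS SDK's SignerFactory -->
  <value>S3SignerType</value>
</property>
{code}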

 






[GitHub] [hadoop] anuengineer commented on issue #1118: HDDS-1811. Prometheus metrics are broken

2019-07-22 Thread GitBox
anuengineer commented on issue #1118: HDDS-1811. Prometheus metrics are broken
URL: https://github.com/apache/hadoop/pull/1118#issuecomment-513964139
 
 
   Thank you for your contribution. I have committed this patch to trunk and 
0.4.1 branch. 





[GitHub] [hadoop] anuengineer closed pull request #1118: HDDS-1811. Prometheus metrics are broken

2019-07-22 Thread GitBox
anuengineer closed pull request #1118: HDDS-1811. Prometheus metrics are broken
URL: https://github.com/apache/hadoop/pull/1118
 
 
   





[GitHub] [hadoop] hadoop-yetus commented on issue #1134: HADOOP-16433. S3Guard: Filter expired entries and tombstones when lis…

2019-07-22 Thread GitBox
hadoop-yetus commented on issue #1134: HADOOP-16433. S3Guard: Filter expired 
entries and tombstones when lis…
URL: https://github.com/apache/hadoop/pull/1134#issuecomment-513962994
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 38 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 6 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1112 | trunk passed |
   | +1 | compile | 34 | trunk passed |
   | +1 | checkstyle | 20 | trunk passed |
   | +1 | mvnsite | 40 | trunk passed |
   | +1 | shadedclient | 715 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 27 | trunk passed |
   | 0 | spotbugs | 60 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 58 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 31 | the patch passed |
   | +1 | compile | 30 | the patch passed |
   | +1 | javac | 30 | the patch passed |
   | -0 | checkstyle | 19 | hadoop-tools/hadoop-aws: The patch generated 1 new 
+ 33 unchanged - 0 fixed = 34 total (was 33) |
   | +1 | mvnsite | 34 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 694 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 18 | the patch passed |
   | -1 | findbugs | 62 | hadoop-tools/hadoop-aws generated 1 new + 0 unchanged 
- 0 fixed = 1 total (was 0) |
   ||| _ Other Tests _ |
   | +1 | unit | 279 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 30 | The patch does not generate ASF License warnings. |
   | | | 3329 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-tools/hadoop-aws |
   |  |  Inconsistent synchronization of 
org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore.ttlTimeProvider; locked 75% 
of time  Unsynchronized access at LocalMetadataStore.java:75% of time  
Unsynchronized access at LocalMetadataStore.java:[line 623] |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.0 Server=19.03.0 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1134/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1134 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 3cb8b901c136 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / cdc36fe |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1134/5/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1134/5/artifact/out/new-findbugs-hadoop-tools_hadoop-aws.html
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1134/5/testReport/ |
   | Max. process+thread count | 414 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1134/5/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-16245) Enabling SSL within LdapGroupsMapping can break system SSL configs

2019-07-22 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16890476#comment-16890476
 ] 

Chen Liang commented on HADOOP-16245:
-

Thanks Erik, +1 to v002 patch.

> Enabling SSL within LdapGroupsMapping can break system SSL configs
> --
>
> Key: HADOOP-16245
> URL: https://issues.apache.org/jira/browse/HADOOP-16245
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, security
>Affects Versions: 2.9.1, 2.8.4, 2.7.6, 3.1.1, 3.0.3
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HADOOP-16245.000.patch, HADOOP-16245.001.patch, 
> HADOOP-16245.002.patch
>
>
> When debugging an issue where one of our server components was unable to 
> communicate with other components via SSL, we realized that LdapGroupsMapping 
> sets its SSL configurations globally, rather than scoping them to the HTTP 
> clients it creates.
> {code:title=LdapGroupsMapping}
>   DirContext getDirContext() throws NamingException {
> if (ctx == null) {
>   // Set up the initial environment for LDAP connectivity
>   Hashtable<String, String> env = new Hashtable<String, String>();
>   env.put(Context.INITIAL_CONTEXT_FACTORY,
>   com.sun.jndi.ldap.LdapCtxFactory.class.getName());
>   env.put(Context.PROVIDER_URL, ldapUrl);
>   env.put(Context.SECURITY_AUTHENTICATION, "simple");
>   // Set up SSL security, if necessary
>   if (useSsl) {
> env.put(Context.SECURITY_PROTOCOL, "ssl");
> if (!keystore.isEmpty()) {
>   System.setProperty("javax.net.ssl.keyStore", keystore);
> }
> if (!keystorePass.isEmpty()) {
>   System.setProperty("javax.net.ssl.keyStorePassword", keystorePass);
> }
> if (!truststore.isEmpty()) {
>   System.setProperty("javax.net.ssl.trustStore", truststore);
> }
> if (!truststorePass.isEmpty()) {
>   System.setProperty("javax.net.ssl.trustStorePassword",
>   truststorePass);
> }
>   }
>   env.put(Context.SECURITY_PRINCIPAL, bindUser);
>   env.put(Context.SECURITY_CREDENTIALS, bindPassword);
>   env.put("com.sun.jndi.ldap.connect.timeout", 
> conf.get(CONNECTION_TIMEOUT,
>   String.valueOf(CONNECTION_TIMEOUT_DEFAULT)));
>   env.put("com.sun.jndi.ldap.read.timeout", conf.get(READ_TIMEOUT,
>   String.valueOf(READ_TIMEOUT_DEFAULT)));
>   ctx = new InitialDirContext(env);
> }
> {code}
> Notice the {{System.setProperty()}} calls, which will change settings 
> JVM-wide. This causes issues for other SSL clients, which may rely on the 
> default JVM truststore being used. This behavior was initially introduced by 
> HADOOP-8121, and extended to include the truststore configurations in 
> HADOOP-12862.
> The correct approach is to use a mechanism which is scoped to the LDAP 
> requests only. The right approach appears to be to use the 
> {{java.naming.ldap.factory.socket}} parameter to set the socket factory to a 
> custom SSL socket factory which correctly sets the key and trust store 
> parameters. See an example [here|https://stackoverflow.com/a/4615497/4979203].
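
A minimal sketch of such a scoped factory follows. The class name and the 
"ldap.local.trustStore" property are hypothetical; the fixed parts are the 
JNDI contract (extend javax.net.SocketFactory and expose a static 
getDefault()) and the idea that the trust material is wired into a private 
SSLContext instead of JVM-wide system properties:

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.net.InetAddress;
import java.net.Socket;
import java.security.GeneralSecurityException;
import java.security.KeyStore;
import javax.net.SocketFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

/**
 * Hypothetical sketch: a socket factory whose trust store is configured
 * locally rather than via the javax.net.ssl.* system properties. JNDI
 * instantiates the class named in "java.naming.ldap.factory.socket" by
 * calling its static getDefault() method.
 */
public class LdapTlsSocketFactory extends SocketFactory {

  private final SocketFactory delegate;

  private LdapTlsSocketFactory(SocketFactory delegate) {
    this.delegate = delegate;
  }

  public static SocketFactory getDefault() {
    try {
      SSLContext ctx = SSLContext.getInstance("TLS");
      // "ldap.local.trustStore" is a made-up property used only to feed
      // this factory its configuration; any scoped mechanism would do.
      String trustStorePath = System.getProperty("ldap.local.trustStore");
      if (trustStorePath == null) {
        // No custom store configured: fall back to the JVM defaults.
        ctx.init(null, null, null);
      } else {
        KeyStore ts = KeyStore.getInstance(KeyStore.getDefaultType());
        try (FileInputStream in = new FileInputStream(trustStorePath)) {
          ts.load(in, null);
        }
        TrustManagerFactory tmf = TrustManagerFactory.getInstance(
            TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(ts);
        // Trust managers are scoped to this SSLContext; nothing global
        // is mutated.
        ctx.init(null, tmf.getTrustManagers(), null);
      }
      return new LdapTlsSocketFactory(ctx.getSocketFactory());
    } catch (IOException | GeneralSecurityException e) {
      throw new IllegalStateException("Cannot build LDAP TLS socket factory", e);
    }
  }

  @Override
  public Socket createSocket(String host, int port) throws IOException {
    return delegate.createSocket(host, port);
  }

  @Override
  public Socket createSocket(String host, int port, InetAddress localHost,
      int localPort) throws IOException {
    return delegate.createSocket(host, port, localHost, localPort);
  }

  @Override
  public Socket createSocket(InetAddress host, int port) throws IOException {
    return delegate.createSocket(host, port);
  }

  @Override
  public Socket createSocket(InetAddress address, int port,
      InetAddress localAddress, int localPort) throws IOException {
    return delegate.createSocket(address, port, localAddress, localPort);
  }
}
```

The LDAP environment would then select it with 
env.put("java.naming.ldap.factory.socket", LdapTlsSocketFactory.class.getName()), 
leaving the javax.net.ssl.* properties untouched for other SSL clients in the 
same JVM.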






[jira] [Commented] (HADOOP-16245) Enabling SSL within LdapGroupsMapping can break system SSL configs

2019-07-22 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16890471#comment-16890471
 ] 

Erik Krogen commented on HADOOP-16245:
--

Thanks [~vagarychen]! I expanded the description a bit more. Let me know what 
you think.

I was unable to reproduce the TestIPC failure locally; I believe it is 
unrelated.

> Enabling SSL within LdapGroupsMapping can break system SSL configs
> --
>
> Key: HADOOP-16245
> URL: https://issues.apache.org/jira/browse/HADOOP-16245
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, security
>Affects Versions: 2.9.1, 2.8.4, 2.7.6, 3.1.1, 3.0.3
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HADOOP-16245.000.patch, HADOOP-16245.001.patch, 
> HADOOP-16245.002.patch
>
>
> When debugging an issue where one of our server components was unable to 
> communicate with other components via SSL, we realized that LdapGroupsMapping 
> sets its SSL configurations globally, rather than scoping them to the HTTP 
> clients it creates.
> {code:title=LdapGroupsMapping}
>   DirContext getDirContext() throws NamingException {
> if (ctx == null) {
>   // Set up the initial environment for LDAP connectivity
>   Hashtable<String, String> env = new Hashtable<String, String>();
>   env.put(Context.INITIAL_CONTEXT_FACTORY,
>   com.sun.jndi.ldap.LdapCtxFactory.class.getName());
>   env.put(Context.PROVIDER_URL, ldapUrl);
>   env.put(Context.SECURITY_AUTHENTICATION, "simple");
>   // Set up SSL security, if necessary
>   if (useSsl) {
> env.put(Context.SECURITY_PROTOCOL, "ssl");
> if (!keystore.isEmpty()) {
>   System.setProperty("javax.net.ssl.keyStore", keystore);
> }
> if (!keystorePass.isEmpty()) {
>   System.setProperty("javax.net.ssl.keyStorePassword", keystorePass);
> }
> if (!truststore.isEmpty()) {
>   System.setProperty("javax.net.ssl.trustStore", truststore);
> }
> if (!truststorePass.isEmpty()) {
>   System.setProperty("javax.net.ssl.trustStorePassword",
>   truststorePass);
> }
>   }
>   env.put(Context.SECURITY_PRINCIPAL, bindUser);
>   env.put(Context.SECURITY_CREDENTIALS, bindPassword);
>   env.put("com.sun.jndi.ldap.connect.timeout", 
> conf.get(CONNECTION_TIMEOUT,
>   String.valueOf(CONNECTION_TIMEOUT_DEFAULT)));
>   env.put("com.sun.jndi.ldap.read.timeout", conf.get(READ_TIMEOUT,
>   String.valueOf(READ_TIMEOUT_DEFAULT)));
>   ctx = new InitialDirContext(env);
> }
> {code}
> Notice the {{System.setProperty()}} calls, which will change settings 
> JVM-wide. This causes issues for other SSL clients, which may rely on the 
> default JVM truststore being used. This behavior was initially introduced by 
> HADOOP-8121, and extended to include the truststore configurations in 
> HADOOP-12862.
> The correct approach is to use a mechanism which is scoped to the LDAP 
> requests only. The right approach appears to be to use the 
> {{java.naming.ldap.factory.socket}} parameter to set the socket factory to a 
> custom SSL socket factory which correctly sets the key and trust store 
> parameters. See an example [here|https://stackoverflow.com/a/4615497/4979203].






[jira] [Updated] (HADOOP-16245) Enabling SSL within LdapGroupsMapping can break system SSL configs

2019-07-22 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HADOOP-16245:
-
Attachment: HADOOP-16245.002.patch

> Enabling SSL within LdapGroupsMapping can break system SSL configs
> --
>
> Key: HADOOP-16245
> URL: https://issues.apache.org/jira/browse/HADOOP-16245
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, security
>Affects Versions: 2.9.1, 2.8.4, 2.7.6, 3.1.1, 3.0.3
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HADOOP-16245.000.patch, HADOOP-16245.001.patch, 
> HADOOP-16245.002.patch
>
>
> When debugging an issue where one of our server components was unable to 
> communicate with other components via SSL, we realized that LdapGroupsMapping 
> sets its SSL configurations globally, rather than scoping them to the HTTP 
> clients it creates.
> {code:title=LdapGroupsMapping}
>   DirContext getDirContext() throws NamingException {
> if (ctx == null) {
>   // Set up the initial environment for LDAP connectivity
>   Hashtable<String, String> env = new Hashtable<String, String>();
>   env.put(Context.INITIAL_CONTEXT_FACTORY,
>   com.sun.jndi.ldap.LdapCtxFactory.class.getName());
>   env.put(Context.PROVIDER_URL, ldapUrl);
>   env.put(Context.SECURITY_AUTHENTICATION, "simple");
>   // Set up SSL security, if necessary
>   if (useSsl) {
> env.put(Context.SECURITY_PROTOCOL, "ssl");
> if (!keystore.isEmpty()) {
>   System.setProperty("javax.net.ssl.keyStore", keystore);
> }
> if (!keystorePass.isEmpty()) {
>   System.setProperty("javax.net.ssl.keyStorePassword", keystorePass);
> }
> if (!truststore.isEmpty()) {
>   System.setProperty("javax.net.ssl.trustStore", truststore);
> }
> if (!truststorePass.isEmpty()) {
>   System.setProperty("javax.net.ssl.trustStorePassword",
>   truststorePass);
> }
>   }
>   env.put(Context.SECURITY_PRINCIPAL, bindUser);
>   env.put(Context.SECURITY_CREDENTIALS, bindPassword);
>   env.put("com.sun.jndi.ldap.connect.timeout", 
> conf.get(CONNECTION_TIMEOUT,
>   String.valueOf(CONNECTION_TIMEOUT_DEFAULT)));
>   env.put("com.sun.jndi.ldap.read.timeout", conf.get(READ_TIMEOUT,
>   String.valueOf(READ_TIMEOUT_DEFAULT)));
>   ctx = new InitialDirContext(env);
> }
> {code}
> Notice the {{System.setProperty()}} calls, which will change settings 
> JVM-wide. This causes issues for other SSL clients, which may rely on the 
> default JVM truststore being used. This behavior was initially introduced by 
> HADOOP-8121, and extended to include the truststore configurations in 
> HADOOP-12862.
> The correct approach is to use a mechanism which is scoped to the LDAP 
> requests only. The right approach appears to be to use the 
> {{java.naming.ldap.factory.socket}} parameter to set the socket factory to a 
> custom SSL socket factory which correctly sets the key and trust store 
> parameters. See an example [here|https://stackoverflow.com/a/4615497/4979203].






[GitHub] [hadoop] hadoop-yetus commented on issue #1123: HADOOP-16380 S3Guard to determine empty directory status for all non-root directories

2019-07-22 Thread GitBox
hadoop-yetus commented on issue #1123: HADOOP-16380 S3Guard to determine empty 
directory status for all non-root directories
URL: https://github.com/apache/hadoop/pull/1123#issuecomment-513948460
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 92 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 7 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 70 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1213 | trunk passed |
   | +1 | compile | 1007 | trunk passed |
   | +1 | checkstyle | 147 | trunk passed |
   | +1 | mvnsite | 126 | trunk passed |
   | +1 | shadedclient | 1042 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 110 | trunk passed |
   | 0 | spotbugs | 71 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 203 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for patch |
   | +1 | mvninstall | 84 | the patch passed |
   | +1 | compile | 1057 | the patch passed |
   | +1 | javac | 1057 | the patch passed |
   | -0 | checkstyle | 149 | root: The patch generated 1 new + 36 unchanged - 0 
fixed = 37 total (was 36) |
   | +1 | mvnsite | 124 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 736 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 97 | the patch passed |
   | +1 | findbugs | 215 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 562 | hadoop-common in the patch passed. |
   | +1 | unit | 308 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 57 | The patch does not generate ASF License warnings. |
   | | | 7430 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.0 Server=19.03.0 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1123/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1123 |
   | Optional Tests | dupname asflicense mvnsite compile javac javadoc 
mvninstall unit shadedclient findbugs checkstyle |
   | uname | Linux 784008921b11 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 340bbaf |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1123/6/artifact/out/diff-checkstyle-root.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1123/6/testReport/ |
   | Max. process+thread count | 1346 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1123/6/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bgaborg commented on a change in pull request #1134: HADOOP-16433. S3Guard: Filter expired entries and tombstones when lis…

2019-07-22 Thread GitBox
bgaborg commented on a change in pull request #1134: HADOOP-16433. S3Guard: 
Filter expired entries and tombstones when lis…
URL: https://github.com/apache/hadoop/pull/1134#discussion_r306023290
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/TestDirListingMetadata.java
 ##
 @@ -291,6 +293,38 @@ public void testRemoveNotChild() {
 meta.remove(new Path("/different/ancestor"));
   }
 
+
+  @Test
+  public void testExpiredEntriesFromListing() {
 
 Review comment:
   Yes, this should be removeExpiredEntriesFromListing. TODO: update it!





[GitHub] [hadoop] bgaborg commented on a change in pull request #1134: HADOOP-16433. S3Guard: Filter expired entries and tombstones when lis…

2019-07-22 Thread GitBox
bgaborg commented on a change in pull request #1134: HADOOP-16433. S3Guard: 
Filter expired entries and tombstones when lis…
URL: https://github.com/apache/hadoop/pull/1134#discussion_r306022775
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3GuardTtl.java
 ##
 @@ -286,4 +290,70 @@ public void testCreateParentHasTombstone() throws 
Exception {
 }
   }
 
+  /**
+   * Test that listing is filtering expired items, so
 
 Review comment:
   Finish this comment
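The behaviour under test (dropping expired entries from a listing) can be sketched independently of the S3Guard classes. Everything below is illustrative: the `Entry` class and `filterExpired` method are stand-ins, not the names used in the actual patch.

```java
import java.util.ArrayList;
import java.util.List;

public class TtlFilterSketch {
    // Minimal stand-in for a listing entry carrying a last-updated timestamp.
    static class Entry {
        final String path;
        final long lastUpdatedMillis;
        Entry(String path, long lastUpdatedMillis) {
            this.path = path;
            this.lastUpdatedMillis = lastUpdatedMillis;
        }
    }

    // Keep entries still inside the TTL window; drop the expired ones.
    static List<Entry> filterExpired(List<Entry> entries, long ttlMillis,
                                     long nowMillis) {
        List<Entry> live = new ArrayList<>();
        for (Entry e : entries) {
            if (nowMillis - e.lastUpdatedMillis <= ttlMillis) {
                live.add(e);
            }
        }
        return live;
    }

    public static void main(String[] args) {
        long now = 10_000L;
        List<Entry> entries = new ArrayList<>();
        entries.add(new Entry("/fresh", 9_500L)); // 0.5s old, inside the TTL
        entries.add(new Entry("/stale", 1_000L)); // 9s old, expired
        List<Entry> live = filterExpired(entries, 5_000L, now);
        System.out.println(live.size() + " " + live.get(0).path);
    }
}
```

A test like the one under review would then assert that only the fresh entry survives the listing.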





[GitHub] [hadoop] bgaborg commented on a change in pull request #1134: HADOOP-16433. S3Guard: Filter expired entries and tombstones when lis…

2019-07-22 Thread GitBox
bgaborg commented on a change in pull request #1134: HADOOP-16433. S3Guard: 
Filter expired entries and tombstones when lis…
URL: https://github.com/apache/hadoop/pull/1134#discussion_r306022959
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/TestDirListingMetadata.java
 ##
 @@ -291,6 +293,38 @@ public void testRemoveNotChild() {
 meta.remove(new Path("/different/ancestor"));
   }
 
+
+  @Test
+  public void testExpiredEntriesFromListing() {
 
 Review comment:
   Not the best test name, maybe testRemoveExpiredEntriesFromListing?





[GitHub] [hadoop] bgaborg commented on issue #1134: HADOOP-16433. S3Guard: Filter expired entries and tombstones when lis…

2019-07-22 Thread GitBox
bgaborg commented on issue #1134: HADOOP-16433. S3Guard: Filter expired entries 
and tombstones when lis…
URL: https://github.com/apache/hadoop/pull/1134#issuecomment-513944209
 
 
   somehow yetus hates me now. this is new.





[jira] [Created] (HADOOP-16444) Updating incompatible issue

2019-07-22 Thread xia0c (JIRA)
xia0c created HADOOP-16444:
--

 Summary: Updating incompatible issue
 Key: HADOOP-16444
 URL: https://issues.apache.org/jira/browse/HADOOP-16444
 Project: Hadoop Common
  Issue Type: Bug
  Components: net
Affects Versions: 3.1.2
Reporter: xia0c


Hi,

When I try to update hadoop-common to the latest version 3.2.0, I get an 
incompatibility issue with HBase. The code below works on version 2.5.0-cdh5.3.10.

{code:java}
public class Test {
  public void test() throws Exception {
    HBaseTestingUtility htu1 = new HBaseTestingUtility();
    htu1.startMiniCluster();
  }
}
{code}

Thanks a lot



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)




[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #1130: HDDS-1827. Load Snapshot info when OM Ratis server starts.

2019-07-22 Thread GitBox
xiaoyuyao commented on a change in pull request #1130: HDDS-1827. Load Snapshot 
info when OM Ratis server starts.
URL: https://github.com/apache/hadoop/pull/1130#discussion_r306016722
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ratis/TestOzoneManagerRatisServer.java
 ##
 @@ -126,6 +130,24 @@ public void testStartOMRatisServer() throws Exception {
 LifeCycle.State.RUNNING, omRatisServer.getServerState());
   }
 
+  @Test
+  public void testLoadSnapshotInfoOnStart() throws Exception {
+// Stop the Ratis server and manually update the snapshotInfo.
+long oldSnapshotIndex = ozoneManager.saveRatisSnapshot();
+ozoneManager.getSnapshotInfo().saveRatisSnapshotToDisk(oldSnapshotIndex);
+omRatisServer.stop();
+long newSnapshotIndex = oldSnapshotIndex + 100;
+ozoneManager.getSnapshotInfo().saveRatisSnapshotToDisk(newSnapshotIndex);
+
+// Start new Ratis server. It should pick up and load the new SnapshotInfo
+omRatisServer = OzoneManagerRatisServer.newOMRatisServer(conf, 
ozoneManager,
+omNodeDetails, Collections.emptyList());
+omRatisServer.start();
 
 Review comment:
   Should we stop the omRatisServer after the test?
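The cleanup the reviewer is asking for is the usual JUnit resource-leak pattern: a server started in a test body leaks if an assertion fails before the stop call. A self-contained sketch of the fix using a dummy server in place of `OzoneManagerRatisServer` (the `FakeServer` class is illustrative only):

```java
public class TeardownSketch {
    // Dummy stand-in for the Ratis server; only tracks its lifecycle.
    static class FakeServer {
        boolean running;
        void start() { running = true; }
        void stop() { running = false; }
    }

    static FakeServer omRatisServer = new FakeServer();

    // try/finally guarantees the stop even if an assertion throws --
    // exactly what an @After teardown method would give the real test.
    static void runTest() {
        omRatisServer.start();
        try {
            // test assertions on the restarted server would go here
        } finally {
            omRatisServer.stop();
        }
    }

    public static void main(String[] args) {
        runTest();
        System.out.println(omRatisServer.running ? "leaked" : "stopped");
    }
}
```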





[jira] [Commented] (HADOOP-16443) Improve help text for setfacl --set option

2019-07-22 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16890434#comment-16890434
 ] 

Hadoop QA commented on HADOOP-16443:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 11s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
35s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
51s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}103m 16s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.0 Server=19.03.0 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HADOOP-16443 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12975421/HADOOP-16443.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c9012734bc31 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 340bbaf |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16403/testReport/ |
| Max. process+thread count | 1380 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16403/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Improve help text for setfacl --set option

[jira] [Commented] (HADOOP-16245) Enabling SSL within LdapGroupsMapping can break system SSL configs

2019-07-22 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16890426#comment-16890426
 ] 

Hadoop QA commented on HADOOP-16245:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  9s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 44s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 90m 40s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ipc.TestIPC |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.0 Server=19.03.0 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HADOOP-16245 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12975423/HADOOP-16245.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c96e7a4763c4 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 340bbaf |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16404/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16404/testReport/ |
| Max. process+thread count | 1402 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 

[GitHub] [hadoop] arp7 commented on issue #960: HDDS-1679. debug patch

2019-07-22 Thread GitBox
arp7 commented on issue #960: HDDS-1679. debug patch
URL: https://github.com/apache/hadoop/pull/960#issuecomment-513915383
 
 
   @mukul1987 do we want the debug patch to be merged? Or was this just for the 
pre-commit run?





[GitHub] [hadoop] arp7 merged pull request #948: HDDS-1649. On installSnapshot notification from OM leader, download checkpoint and reload OM state

2019-07-22 Thread GitBox
arp7 merged pull request #948: HDDS-1649. On installSnapshot notification from 
OM leader, download checkpoint and reload OM state
URL: https://github.com/apache/hadoop/pull/948
 
 
   





[GitHub] [hadoop] arp7 commented on issue #948: HDDS-1649. On installSnapshot notification from OM leader, download checkpoint and reload OM state

2019-07-22 Thread GitBox
arp7 commented on issue #948: HDDS-1649. On installSnapshot notification from 
OM leader, download checkpoint and reload OM state
URL: https://github.com/apache/hadoop/pull/948#issuecomment-513913564
 
 
   I am merging this with a couple of caveats.
   1. There are numerous integration test failures. However, these tests also 
fail in current trunk, so they are likely unrelated.
   2. More seriously, the integration test 
`TestOzoneManagerSnapshotProvider.testDownloadCheckpoint`, which exercises 
related functionality, failed in the pre-commit run. It passes for me locally 
with the patch applied, so this is likely a flaky test.





[jira] [Commented] (HADOOP-16245) Enabling SSL within LdapGroupsMapping can break system SSL configs

2019-07-22 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16890353#comment-16890353
 ] 

Chen Liang commented on HADOOP-16245:
-

Thanks for fixing this. v001 patch LGTM.

Just one nit: for future reference, maybe we should document in more detail 
why we are doing it this way, e.g. why a static class is enough in this case 
(as you mentioned); that the static class has the same behaviour as the 
SSLSocketFactory class (based on my understanding); and that new code should 
be aware of the static fields, so what Daryn mentioned can be avoided in 
future code changes. With this addressed, +1 from me.

> Enabling SSL within LdapGroupsMapping can break system SSL configs
> --
>
> Key: HADOOP-16245
> URL: https://issues.apache.org/jira/browse/HADOOP-16245
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, security
>Affects Versions: 2.9.1, 2.8.4, 2.7.6, 3.1.1, 3.0.3
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HADOOP-16245.000.patch, HADOOP-16245.001.patch
>
>
> When debugging an issue where one of our server components was unable to 
> communicate with other components via SSL, we realized that LdapGroupsMapping 
> sets its SSL configurations globally, rather than scoping them to the HTTP 
> clients it creates.
> {code:title=LdapGroupsMapping}
>   DirContext getDirContext() throws NamingException {
> if (ctx == null) {
>   // Set up the initial environment for LDAP connectivity
>   Hashtable env = new Hashtable();
>   env.put(Context.INITIAL_CONTEXT_FACTORY,
>   com.sun.jndi.ldap.LdapCtxFactory.class.getName());
>   env.put(Context.PROVIDER_URL, ldapUrl);
>   env.put(Context.SECURITY_AUTHENTICATION, "simple");
>   // Set up SSL security, if necessary
>   if (useSsl) {
> env.put(Context.SECURITY_PROTOCOL, "ssl");
> if (!keystore.isEmpty()) {
>   System.setProperty("javax.net.ssl.keyStore", keystore);
> }
> if (!keystorePass.isEmpty()) {
>   System.setProperty("javax.net.ssl.keyStorePassword", keystorePass);
> }
> if (!truststore.isEmpty()) {
>   System.setProperty("javax.net.ssl.trustStore", truststore);
> }
> if (!truststorePass.isEmpty()) {
>   System.setProperty("javax.net.ssl.trustStorePassword",
>   truststorePass);
> }
>   }
>   env.put(Context.SECURITY_PRINCIPAL, bindUser);
>   env.put(Context.SECURITY_CREDENTIALS, bindPassword);
>   env.put("com.sun.jndi.ldap.connect.timeout", 
> conf.get(CONNECTION_TIMEOUT,
>   String.valueOf(CONNECTION_TIMEOUT_DEFAULT)));
>   env.put("com.sun.jndi.ldap.read.timeout", conf.get(READ_TIMEOUT,
>   String.valueOf(READ_TIMEOUT_DEFAULT)));
>   ctx = new InitialDirContext(env);
> }
> {code}
> Notice the {{System.setProperty()}} calls, which will change settings 
> JVM-wide. This causes issues for other SSL clients, which may rely on the 
> default JVM truststore being used. This behavior was initially introduced by 
> HADOOP-8121, and extended to include the truststore configurations in 
> HADOOP-12862.
> The correct approach is to use a mechanism which is scoped to the LDAP 
> requests only. The right approach appears to be to use the 
> {{java.naming.ldap.factory.socket}} parameter to set the socket factory to a 
> custom SSL socket factory which correctly sets the key and trust store 
> parameters. See an example [here|https://stackoverflow.com/a/4615497/4979203].
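The scoped mechanism described above can be sketched as follows: the SSL settings are wired through the JNDI environment of this one context via `java.naming.ldap.factory.socket`, and no `System.setProperty` call is made. The factory class name `org.example.LdapTlsSocketFactory` is hypothetical; JNDI would load it reflectively and call its public static `getDefault()` method.

```java
import java.util.Hashtable;
import javax.naming.Context;

public class ScopedLdapSslSketch {

    // Build a JNDI environment whose SSL settings are scoped to this
    // context only, instead of mutating JVM-wide system properties.
    static Hashtable<String, String> buildEnv(String ldapUrl) {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY,
            "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, ldapUrl);
        env.put(Context.SECURITY_PROTOCOL, "ssl");
        // JNDI instantiates this class (hypothetical name) and uses the
        // SSLSocketFactory it returns, leaving the JVM defaults untouched.
        env.put("java.naming.ldap.factory.socket",
            "org.example.LdapTlsSocketFactory");
        return env;
    }

    public static void main(String[] args) {
        Hashtable<String, String> env =
            buildEnv("ldaps://ldap.example.com:636");
        System.out.println(env.get("java.naming.ldap.factory.socket"));
    }
}
```

The factory itself would extend `SSLSocketFactory` and build its `SSLContext` from the LDAP-specific key and trust stores, along the lines of the Stack Overflow example linked above.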






[GitHub] [hadoop] jojochuang commented on issue #1138: HADOOP-16443 Improve help text for setfacl --set option

2019-07-22 Thread GitBox
jojochuang commented on issue #1138: HADOOP-16443 Improve help text for setfacl 
--set option
URL: https://github.com/apache/hadoop/pull/1138#issuecomment-513887300
 
 
   LGTM. The updated text comes from the javadoc of 
`AclTransformation#replaceAclEntries()`





[GitHub] [hadoop] arp7 commented on issue #1135: HDDS-1840. Fix TestSecureOzoneContainer.

2019-07-22 Thread GitBox
arp7 commented on issue #1135: HDDS-1840. Fix TestSecureOzoneContainer.
URL: https://github.com/apache/hadoop/pull/1135#issuecomment-513876258
 
 
   +1 thanks for the review @adoroszlai  and thanks @bharatviswa504 for fixing 
this.





[GitHub] [hadoop] arp7 merged pull request #1135: HDDS-1840. Fix TestSecureOzoneContainer.

2019-07-22 Thread GitBox
arp7 merged pull request #1135: HDDS-1840. Fix TestSecureOzoneContainer.
URL: https://github.com/apache/hadoop/pull/1135
 
 
   





[GitHub] [hadoop] bgaborg commented on issue #1134: HADOOP-16433. S3Guard: Filter expired entries and tombstones when lis…

2019-07-22 Thread GitBox
bgaborg commented on issue #1134: HADOOP-16433. S3Guard: Filter expired entries 
and tombstones when lis…
URL: https://github.com/apache/hadoop/pull/1134#issuecomment-513875374
 
 
   I tested against Ireland with no unknown issues.





[jira] [Commented] (HADOOP-16443) Improve help text for setfacl --set option

2019-07-22 Thread Stephen O'Donnell (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16890311#comment-16890311
 ] 

Stephen O'Donnell commented on HADOOP-16443:


No problem, I have created the PR.

> Improve help text for setfacl --set option
> --
>
> Key: HADOOP-16443
> URL: https://issues.apache.org/jira/browse/HADOOP-16443
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Minor
> Attachments: HADOOP-16443.001.patch
>
>
> The help text associated with the command "setfacl --set" states:
> {quote}
>  --set <acl_spec>: Fully replace the ACL, discarding all existing entries. 
>  The <acl_spec> must include entries for user, group, and others for 
>  compatibility with permission bits.
> {quote}
> However the actual behaviour is a bit more subtle:
> {quote}
> If the ACL spec contains only access entries, then the existing default 
> entries are retained. If the ACL spec contains only default entries, then the 
> existing access entries are retained. If the ACL spec contains both access 
> and default entries, then both are replaced.
> {quote}
> This Jira will improve the help text to align more closely with the expected 
> behaviour.
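The subtle behaviour quoted above amounts to a per-scope replacement rule: a scope (access or default) present in the ACL spec fully replaces that scope, and a scope absent from the spec is retained. A minimal sketch of that rule, using plain entry strings instead of Hadoop's `AclEntry` class (all names here are illustrative):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SetfaclSetSketch {

    static boolean isDefault(String entry) {
        return entry.startsWith("default:");
    }

    // --set semantics: each scope present in the spec replaces that scope
    // wholesale; a scope missing from the spec keeps its existing entries.
    static List<String> replaceAcl(List<String> existing, List<String> spec) {
        boolean specHasAccess = spec.stream().anyMatch(e -> !isDefault(e));
        boolean specHasDefault = spec.stream().anyMatch(SetfaclSetSketch::isDefault);
        List<String> result = new ArrayList<>(spec);
        for (String e : existing) {
            boolean retained = isDefault(e) ? !specHasDefault : !specHasAccess;
            if (retained) {
                result.add(e); // scope untouched by the spec survives
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<String> existing = Arrays.asList("user::rwx", "default:user::rwx");
        List<String> accessOnlySpec =
            Arrays.asList("user::rw-", "group::r--", "other::---");
        // Access-only spec: the existing default entries are retained.
        System.out.println(
            replaceAcl(existing, accessOnlySpec).contains("default:user::rwx"));
    }
}
```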






[GitHub] [hadoop] sodonnel opened a new pull request #1138: HADOOP-16443 Improve help text for setfacl --set option

2019-07-22 Thread GitBox
sodonnel opened a new pull request #1138: HADOOP-16443 Improve help text for 
setfacl --set option
URL: https://github.com/apache/hadoop/pull/1138
 
 
   The help text associated with the command "setfacl --set" states:
   
   ```
    --set <acl_spec>: Fully replace the ACL, discarding all existing entries. 
The <acl_spec> must include entries for user, group, and others for 
compatibility with permission bits.
   ```
   However the actual behaviour is a bit more subtle:
   
   >If the ACL spec contains only access entries, then the existing default 
entries are retained. If the ACL spec contains only default entries, then the 
existing access entries are retained. If the ACL spec contains both access and 
default entries, then both are replaced.
   
   This PR will improve the help text to align more closely with the expected behaviour.





[jira] [Assigned] (HADOOP-16441) if use -Dbundle.openssl=true, bundled with unnecessary libk5crypto.*

2019-07-22 Thread Masatake Iwasaki (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki reassigned HADOOP-16441:
-

Assignee: KWON BYUNGCHANG

>  if use -Dbundle.openssl=true, bundled with unnecessary libk5crypto.*
> -
>
> Key: HADOOP-16441
> URL: https://issues.apache.org/jira/browse/HADOOP-16441
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.1.2
>Reporter: KWON BYUNGCHANG
>Assignee: KWON BYUNGCHANG
>Priority: Major
> Attachments: HADOOP-16441.001.patch
>
>
> If -Dbundle.openssl=true is used to copy the contents of the openssl.lib 
> directory into the final tar file, the unnecessary libk5crypto.* libraries 
> are bundled as well, because the libpattern "crypto" also matches them.
> build log
> {noformat}
> + bundle_native_lib true openssl.lib crypto /usr/lib64
> + declare bundleoption=true
> + declare liboption=openssl.lib
> + declare libpattern=crypto
> + declare libdir=/usr/lib64
> + echo 'Checking to bundle with:'
> + echo 'bundleoption=true, liboption=openssl.lib, pattern=crypto 
> libdir=/usr/lib64'
> + [[ true != \t\r\u\e ]]
> + [[ -z /usr/lib64 ]]
> + [[ ! -d /usr/lib64 ]]
> + cd /usr/lib64
> + cd 
> /home/magnum/hadoop-hdp/hadoop-common-project/hadoop-common/target/hadoop-common-3.1.1.3.1.2.3.1.0.0-78/lib/native/
> + tar xfBp -
> + tar cf - ./libcrypto.so ./libcrypto.so.10 ./libcrypto.so.1.0.2k 
> ./libk5crypto.so ./libk5crypto.so.3 ./libk5crypto.so.3.1
> {noformat}
>  
> bundled native library list
> {noformat}
> [magnum@0dabe9f5564d hadoop-hdp]$ ls -al 
> hadoop-dist/target/hadoop-3.1.1.3.1.2.3.1.0.0-78/lib/native/
> total 22704
> drwxrwxr-x 3 magnum magnum4096 Jul 22 04:22 .
> drwxrwxr-x 3 magnum magnum  20 Jul 22 04:30 ..
> drwxrwxr-x 2 magnum magnum  94 Jul 22 04:22 examples
> lrwxrwxrwx 1 magnum magnum  19 Jul  9 03:20 libcrypto.so -> 
> libcrypto.so.1.0.2k
> lrwxrwxrwx 1 magnum magnum  19 Jul  9 03:20 libcrypto.so.10 -> 
> libcrypto.so.1.0.2k
> -rwxr-xr-x 1 magnum magnum 2516624 Mar 12 10:12 libcrypto.so.1.0.2k
> -rw-rw-r-- 1 magnum magnum 1820202 Jul 22 04:13 libhadoop.a
> -rw-rw-r-- 1 magnum magnum 1607168 Jul 22 04:22 libhadooppipes.a
> lrwxrwxrwx 1 magnum magnum  18 Jul 22 04:13 libhadoop.so -> 
> libhadoop.so.1.0.0
> -rwxrwxr-x 1 magnum magnum 1026006 Jul 22 04:13 libhadoop.so.1.0.0
> -rw-rw-r-- 1 magnum magnum  475720 Jul 22 04:22 libhadooputils.a
> -rw-rw-r-- 1 magnum magnum  458600 Jul 22 04:16 libhdfs.a
> lrwxrwxrwx 1 magnum magnum  16 Jul 22 04:16 libhdfs.so -> libhdfs.so.0.0.0
> -rwxrwxr-x 1 magnum magnum  286052 Jul 22 04:16 libhdfs.so.0.0.0
> -rw-r--r-- 1 magnum magnum 1393974 Jul  9 04:47 libisal.a
> -rwxr-xr-x 1 magnum magnum 915 Jul  9 04:47 libisal.la
> lrwxrwxrwx 1 magnum magnum  17 Jul  9 04:47 libisal.so -> 
> libisal.so.2.0.27
> lrwxrwxrwx 1 magnum magnum  17 Jul  9 04:47 libisal.so.2 -> 
> libisal.so.2.0.27
> -rwxr-xr-x 1 magnum magnum  767778 Jul  9 04:47 libisal.so.2.0.27
> ==
> lrwxrwxrwx 1 magnum magnum  18 Aug  3  2018 libk5crypto.so -> 
> libk5crypto.so.3.1
> lrwxrwxrwx 1 magnum magnum  18 Aug  3  2018 libk5crypto.so.3 -> 
> libk5crypto.so.3.1
> -rwxr-xr-x 1 magnum magnum  210840 May  9  2018 libk5crypto.so.3.1
> ==
> -rw-rw-r-- 1 magnum magnum 8584562 Jul 22 04:21 libnativetask.a
> lrwxrwxrwx 1 magnum magnum  22 Jul 22 04:21 libnativetask.so -> 
> libnativetask.so.1.0.0
> -rwxrwxr-x 1 magnum magnum 3393065 Jul 22 04:21 libnativetask.so.1.0.0
> lrwxrwxrwx 1 magnum magnum  18 Jul  9 04:45 libsnappy.so -> 
> libsnappy.so.1.1.4
> lrwxrwxrwx 1 magnum magnum  18 Jul  9 04:45 libsnappy.so.1 -> 
> libsnappy.so.1.1.4
> -rwxr-xr-x 1 magnum magnum   23800 Jun 10  2014 libsnappy.so.1.1.4
> lrwxrwxrwx 1 magnum magnum  16 Jul  9 04:45 libzstd.so -> libzstd.so.1.4.0
> lrwxrwxrwx 1 magnum magnum  16 Jul  9 04:45 libzstd.so.1 -> 
> libzstd.so.1.4.0
> -rwxr-xr-x 1 magnum magnum  649784 Apr 29 16:58 libzstd.so.1.4.0
> {noformat}
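The root cause visible in the log above is that the library pattern {{crypto}} is matched as an unanchored substring, so it also selects {{libk5crypto.*}}. The following is an illustrative Java sketch only (Hadoop's actual dist-layout script is shell, and the class name here is hypothetical) showing the difference between an unanchored and an anchored match over the file names from the log:

```java
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class LibPatternDemo {
    // File names taken from the build log above.
    static final List<String> FILES = Arrays.asList(
        "libcrypto.so", "libcrypto.so.10", "libcrypto.so.1.0.2k",
        "libk5crypto.so", "libk5crypto.so.3", "libk5crypto.so.3.1");

    // Select the files whose name matches the given regex anywhere.
    static List<String> select(String regex) {
        Pattern p = Pattern.compile(regex);
        return FILES.stream()
                    .filter(f -> p.matcher(f).find())
                    .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Unanchored pattern: also picks up the libk5crypto.* files.
        System.out.println(select("crypto"));
        // Anchored on the library-name prefix: only libcrypto.*.
        System.out.println(select("^libcrypto\\."));
    }
}
```

Anchoring the pattern to the start of the library name (or matching the exact `lib<pattern>.` prefix) is the kind of tightening the attached patch aims for.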



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16443) Improve help text for setfacl --set option

2019-07-22 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16890298#comment-16890298
 ] 

Steve Loughran commented on HADOOP-16443:
-

Stephen, we're slowly moving to GitHub for reviewing. Could you submit a PR 
there with this JIRA ID in the title? Thanks.

> Improve help text for setfacl --set option
> --
>
> Key: HADOOP-16443
> URL: https://issues.apache.org/jira/browse/HADOOP-16443
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Minor
> Attachments: HADOOP-16443.001.patch
>
>
> The help text associated with the command "setfacl --set" states:
> {quote}
>  --set   Fully replace the ACL, discarding all existing entries. The
>   <acl_spec> must include entries for user, group, and others for
>   compatibility with permission bits.
> {quote}
> However the actual behaviour is a bit more subtle:
> {quote}
> If the ACL spec contains only access entries, then the existing default 
> entries are retained. If the ACL spec contains only default entries, then the 
> existing access entries are retained. If the ACL spec contains both access 
> and default entries, then both are replaced.
> {quote}
> This Jira will improve the help text to align more closely with the actual 
> behaviour.
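The replacement rules quoted above can be modeled in a few lines. This is an illustrative sketch of the semantics only (the class name is hypothetical, not Hadoop code): access and default entries are replaced independently, and any kind of entry absent from the new spec is retained.

```java
import java.util.ArrayList;
import java.util.List;

public class SetfaclSetModel {
    // Model of the --set semantics described above: entries of a kind
    // (access vs. default) are only replaced when the new spec contains
    // at least one entry of that kind; otherwise they are retained.
    static List<String> applySet(List<String> existing, List<String> spec) {
        boolean specHasDefault = spec.stream().anyMatch(e -> e.startsWith("default:"));
        boolean specHasAccess  = spec.stream().anyMatch(e -> !e.startsWith("default:"));
        List<String> result = new ArrayList<>(spec);
        for (String e : existing) {
            boolean isDefault = e.startsWith("default:");
            if (isDefault ? !specHasDefault : !specHasAccess) {
                result.add(e);  // retained: the spec says nothing about this kind
            }
        }
        return result;
    }
}
```

For example, applying a spec with only access entries leaves the existing default entries in place, which is exactly the subtlety the current help text fails to mention.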






[jira] [Commented] (HADOOP-16441) if use -Dbundle.openssl=true, bundled with unnecessary libk5crypto.*

2019-07-22 Thread Masatake Iwasaki (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16890296#comment-16890296
 ] 

Masatake Iwasaki commented on HADOOP-16441:
---

Thanks for working on this, [~magnum].
{noformat}
+set -x
{noformat}
Is this an intentional change? It looks like a temporary one left in for debugging.

The patch looks good to me overall. Since I'm using CentOS for development too, 
it would be nice to get comments from someone using another Linux distro like 
Ubuntu, though I believe they follow the same naming convention for library files.

>  if use -Dbundle.openssl=true, bundled with unnecessary libk5crypto.*
> -
>
> Key: HADOOP-16441
> URL: https://issues.apache.org/jira/browse/HADOOP-16441
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.1.2
>Reporter: KWON BYUNGCHANG
>Priority: Major
> Attachments: HADOOP-16441.001.patch
>
>
> If -Dbundle.openssl is used to copy the contents of the openssl.lib directory 
> into the final tar file,  
> the build ends up bundling the unnecessary libk5crypto.* libraries.
> build log
> {noformat}
> + bundle_native_lib true openssl.lib crypto /usr/lib64
> + declare bundleoption=true
> + declare liboption=openssl.lib
> + declare libpattern=crypto
> + declare libdir=/usr/lib64
> + echo 'Checking to bundle with:'
> + echo 'bundleoption=true, liboption=openssl.lib, pattern=crypto 
> libdir=/usr/lib64'
> + [[ true != \t\r\u\e ]]
> + [[ -z /usr/lib64 ]]
> + [[ ! -d /usr/lib64 ]]
> + cd /usr/lib64
> + cd 
> /home/magnum/hadoop-hdp/hadoop-common-project/hadoop-common/target/hadoop-common-3.1.1.3.1.2.3.1.0.0-78/lib/native/
> + tar xfBp -
> + tar cf - ./libcrypto.so ./libcrypto.so.10 ./libcrypto.so.1.0.2k 
> ./libk5crypto.so ./libk5crypto.so.3 ./libk5crypto.so.3.1
> {noformat}
>  
> bundled native library list
> {noformat}
> [magnum@0dabe9f5564d hadoop-hdp]$ ls -al 
> hadoop-dist/target/hadoop-3.1.1.3.1.2.3.1.0.0-78/lib/native/
> total 22704
> drwxrwxr-x 3 magnum magnum4096 Jul 22 04:22 .
> drwxrwxr-x 3 magnum magnum  20 Jul 22 04:30 ..
> drwxrwxr-x 2 magnum magnum  94 Jul 22 04:22 examples
> lrwxrwxrwx 1 magnum magnum  19 Jul  9 03:20 libcrypto.so -> 
> libcrypto.so.1.0.2k
> lrwxrwxrwx 1 magnum magnum  19 Jul  9 03:20 libcrypto.so.10 -> 
> libcrypto.so.1.0.2k
> -rwxr-xr-x 1 magnum magnum 2516624 Mar 12 10:12 libcrypto.so.1.0.2k
> -rw-rw-r-- 1 magnum magnum 1820202 Jul 22 04:13 libhadoop.a
> -rw-rw-r-- 1 magnum magnum 1607168 Jul 22 04:22 libhadooppipes.a
> lrwxrwxrwx 1 magnum magnum  18 Jul 22 04:13 libhadoop.so -> 
> libhadoop.so.1.0.0
> -rwxrwxr-x 1 magnum magnum 1026006 Jul 22 04:13 libhadoop.so.1.0.0
> -rw-rw-r-- 1 magnum magnum  475720 Jul 22 04:22 libhadooputils.a
> -rw-rw-r-- 1 magnum magnum  458600 Jul 22 04:16 libhdfs.a
> lrwxrwxrwx 1 magnum magnum  16 Jul 22 04:16 libhdfs.so -> libhdfs.so.0.0.0
> -rwxrwxr-x 1 magnum magnum  286052 Jul 22 04:16 libhdfs.so.0.0.0
> -rw-r--r-- 1 magnum magnum 1393974 Jul  9 04:47 libisal.a
> -rwxr-xr-x 1 magnum magnum 915 Jul  9 04:47 libisal.la
> lrwxrwxrwx 1 magnum magnum  17 Jul  9 04:47 libisal.so -> 
> libisal.so.2.0.27
> lrwxrwxrwx 1 magnum magnum  17 Jul  9 04:47 libisal.so.2 -> 
> libisal.so.2.0.27
> -rwxr-xr-x 1 magnum magnum  767778 Jul  9 04:47 libisal.so.2.0.27
> ==
> lrwxrwxrwx 1 magnum magnum  18 Aug  3  2018 libk5crypto.so -> 
> libk5crypto.so.3.1
> lrwxrwxrwx 1 magnum magnum  18 Aug  3  2018 libk5crypto.so.3 -> 
> libk5crypto.so.3.1
> -rwxr-xr-x 1 magnum magnum  210840 May  9  2018 libk5crypto.so.3.1
> ==
> -rw-rw-r-- 1 magnum magnum 8584562 Jul 22 04:21 libnativetask.a
> lrwxrwxrwx 1 magnum magnum  22 Jul 22 04:21 libnativetask.so -> 
> libnativetask.so.1.0.0
> -rwxrwxr-x 1 magnum magnum 3393065 Jul 22 04:21 libnativetask.so.1.0.0
> lrwxrwxrwx 1 magnum magnum  18 Jul  9 04:45 libsnappy.so -> 
> libsnappy.so.1.1.4
> lrwxrwxrwx 1 magnum magnum  18 Jul  9 04:45 libsnappy.so.1 -> 
> libsnappy.so.1.1.4
> -rwxr-xr-x 1 magnum magnum   23800 Jun 10  2014 libsnappy.so.1.1.4
> lrwxrwxrwx 1 magnum magnum  16 Jul  9 04:45 libzstd.so -> libzstd.so.1.4.0
> lrwxrwxrwx 1 magnum magnum  16 Jul  9 04:45 libzstd.so.1 -> 
> libzstd.so.1.4.0
> -rwxr-xr-x 1 magnum magnum  649784 Apr 29 16:58 libzstd.so.1.4.0
> {noformat}






[jira] [Commented] (HADOOP-16245) Enabling SSL within LdapGroupsMapping can break system SSL configs

2019-07-22 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16890292#comment-16890292
 ] 

Erik Krogen commented on HADOOP-16245:
--

I've now tested this on one of our live clusters and confirmed that I was able 
to configure {{LdapGroupsMapping}} without negatively impacting other SSL 
connections, fixing the issue discussed here.

I rebased the patch and cleaned up the documentation for v001. I think it 
should be ready for commit now. I can't think of a good way to test this in a 
unit test, so I haven't added one for now.

> Enabling SSL within LdapGroupsMapping can break system SSL configs
> --
>
> Key: HADOOP-16245
> URL: https://issues.apache.org/jira/browse/HADOOP-16245
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, security
>Affects Versions: 2.9.1, 2.8.4, 2.7.6, 3.1.1, 3.0.3
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HADOOP-16245.000.patch, HADOOP-16245.001.patch
>
>
> When debugging an issue where one of our server components was unable to 
> communicate with other components via SSL, we realized that LdapGroupsMapping 
> sets its SSL configurations globally, rather than scoping them to the HTTP 
> clients it creates.
> {code:title=LdapGroupsMapping}
>   DirContext getDirContext() throws NamingException {
> if (ctx == null) {
>   // Set up the initial environment for LDAP connectivity
>   Hashtable<String, String> env = new Hashtable<String, String>();
>   env.put(Context.INITIAL_CONTEXT_FACTORY,
>   com.sun.jndi.ldap.LdapCtxFactory.class.getName());
>   env.put(Context.PROVIDER_URL, ldapUrl);
>   env.put(Context.SECURITY_AUTHENTICATION, "simple");
>   // Set up SSL security, if necessary
>   if (useSsl) {
> env.put(Context.SECURITY_PROTOCOL, "ssl");
> if (!keystore.isEmpty()) {
>   System.setProperty("javax.net.ssl.keyStore", keystore);
> }
> if (!keystorePass.isEmpty()) {
>   System.setProperty("javax.net.ssl.keyStorePassword", keystorePass);
> }
> if (!truststore.isEmpty()) {
>   System.setProperty("javax.net.ssl.trustStore", truststore);
> }
> if (!truststorePass.isEmpty()) {
>   System.setProperty("javax.net.ssl.trustStorePassword",
>   truststorePass);
> }
>   }
>   env.put(Context.SECURITY_PRINCIPAL, bindUser);
>   env.put(Context.SECURITY_CREDENTIALS, bindPassword);
>   env.put("com.sun.jndi.ldap.connect.timeout", 
> conf.get(CONNECTION_TIMEOUT,
>   String.valueOf(CONNECTION_TIMEOUT_DEFAULT)));
>   env.put("com.sun.jndi.ldap.read.timeout", conf.get(READ_TIMEOUT,
>   String.valueOf(READ_TIMEOUT_DEFAULT)));
>   ctx = new InitialDirContext(env);
> }
> {code}
> Notice the {{System.setProperty()}} calls, which will change settings 
> JVM-wide. This causes issues for other SSL clients, which may rely on the 
> default JVM truststore being used. This behavior was initially introduced by 
> HADOOP-8121, and extended to include the truststore configurations in 
> HADOOP-12862.
> The correct approach is to use a mechanism which is scoped to the LDAP 
> requests only. The right approach appears to be to use the 
> {{java.naming.ldap.factory.socket}} parameter to set the socket factory to a 
> custom SSL socket factory which correctly sets the key and trust store 
> parameters. See an example [here|https://stackoverflow.com/a/4615497/4979203].
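The scoped mechanism described in the last paragraph can be sketched as a custom SSL socket factory whose trust store is built per instance instead of via {{System.setProperty()}}. This is an illustrative outline under assumptions (the class name and static configuration fields are hypothetical; the actual patch may differ). JNDI instantiates the class named in {{java.naming.ldap.factory.socket}} through its static {{getDefault()}} method:

```java
import java.io.FileInputStream;
import java.net.InetAddress;
import java.net.Socket;
import java.security.KeyStore;
import javax.net.SocketFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocketFactory;
import javax.net.ssl.TrustManagerFactory;

public class LdapTlsSocketFactory extends SSLSocketFactory {
    // In a real fix these would be injected from LdapGroupsMapping's
    // configuration; static fields are a simplification for this sketch.
    static volatile String truststorePath;  // may be null
    static volatile char[] truststorePass;  // may be null

    private final SSLSocketFactory delegate;

    public LdapTlsSocketFactory() throws Exception {
        if (truststorePath == null) {
            // No custom trust store configured: use the JVM default,
            // without touching any javax.net.ssl.* system property.
            delegate = (SSLSocketFactory) SSLSocketFactory.getDefault();
        } else {
            KeyStore ts = KeyStore.getInstance(KeyStore.getDefaultType());
            try (FileInputStream in = new FileInputStream(truststorePath)) {
                ts.load(in, truststorePass);
            }
            TrustManagerFactory tmf = TrustManagerFactory.getInstance(
                TrustManagerFactory.getDefaultAlgorithm());
            tmf.init(ts);
            SSLContext ctx = SSLContext.getInstance("TLS");
            ctx.init(null, tmf.getTrustManagers(), null);
            delegate = ctx.getSocketFactory();
        }
    }

    // JNDI looks this factory method up reflectively.
    public static SocketFactory getDefault() {
        try {
            return new LdapTlsSocketFactory();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    @Override public String[] getDefaultCipherSuites() { return delegate.getDefaultCipherSuites(); }
    @Override public String[] getSupportedCipherSuites() { return delegate.getSupportedCipherSuites(); }
    @Override public Socket createSocket(Socket s, String host, int port, boolean autoClose)
        throws java.io.IOException { return delegate.createSocket(s, host, port, autoClose); }
    @Override public Socket createSocket(String host, int port)
        throws java.io.IOException { return delegate.createSocket(host, port); }
    @Override public Socket createSocket(String host, int port, InetAddress localHost, int localPort)
        throws java.io.IOException { return delegate.createSocket(host, port, localHost, localPort); }
    @Override public Socket createSocket(InetAddress host, int port)
        throws java.io.IOException { return delegate.createSocket(host, port); }
    @Override public Socket createSocket(InetAddress address, int port, InetAddress localAddress, int localPort)
        throws java.io.IOException { return delegate.createSocket(address, port, localAddress, localPort); }
}
```

The factory would then be wired into the JNDI environment instead of the system properties, e.g. `env.put("java.naming.ldap.factory.socket", LdapTlsSocketFactory.class.getName())`, so the key and trust store apply only to the LDAP connections.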






[jira] [Updated] (HADOOP-16245) Enabling SSL within LdapGroupsMapping can break system SSL configs

2019-07-22 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HADOOP-16245:
-
Attachment: HADOOP-16245.001.patch

> Enabling SSL within LdapGroupsMapping can break system SSL configs
> --
>
> Key: HADOOP-16245
> URL: https://issues.apache.org/jira/browse/HADOOP-16245
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, security
>Affects Versions: 2.9.1, 2.8.4, 2.7.6, 3.1.1, 3.0.3
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HADOOP-16245.000.patch, HADOOP-16245.001.patch
>
>
> When debugging an issue where one of our server components was unable to 
> communicate with other components via SSL, we realized that LdapGroupsMapping 
> sets its SSL configurations globally, rather than scoping them to the HTTP 
> clients it creates.
> {code:title=LdapGroupsMapping}
>   DirContext getDirContext() throws NamingException {
> if (ctx == null) {
>   // Set up the initial environment for LDAP connectivity
>   Hashtable<String, String> env = new Hashtable<String, String>();
>   env.put(Context.INITIAL_CONTEXT_FACTORY,
>   com.sun.jndi.ldap.LdapCtxFactory.class.getName());
>   env.put(Context.PROVIDER_URL, ldapUrl);
>   env.put(Context.SECURITY_AUTHENTICATION, "simple");
>   // Set up SSL security, if necessary
>   if (useSsl) {
> env.put(Context.SECURITY_PROTOCOL, "ssl");
> if (!keystore.isEmpty()) {
>   System.setProperty("javax.net.ssl.keyStore", keystore);
> }
> if (!keystorePass.isEmpty()) {
>   System.setProperty("javax.net.ssl.keyStorePassword", keystorePass);
> }
> if (!truststore.isEmpty()) {
>   System.setProperty("javax.net.ssl.trustStore", truststore);
> }
> if (!truststorePass.isEmpty()) {
>   System.setProperty("javax.net.ssl.trustStorePassword",
>   truststorePass);
> }
>   }
>   env.put(Context.SECURITY_PRINCIPAL, bindUser);
>   env.put(Context.SECURITY_CREDENTIALS, bindPassword);
>   env.put("com.sun.jndi.ldap.connect.timeout", 
> conf.get(CONNECTION_TIMEOUT,
>   String.valueOf(CONNECTION_TIMEOUT_DEFAULT)));
>   env.put("com.sun.jndi.ldap.read.timeout", conf.get(READ_TIMEOUT,
>   String.valueOf(READ_TIMEOUT_DEFAULT)));
>   ctx = new InitialDirContext(env);
> }
> {code}
> Notice the {{System.setProperty()}} calls, which will change settings 
> JVM-wide. This causes issues for other SSL clients, which may rely on the 
> default JVM truststore being used. This behavior was initially introduced by 
> HADOOP-8121, and extended to include the truststore configurations in 
> HADOOP-12862.
> The correct approach is to use a mechanism which is scoped to the LDAP 
> requests only. The right approach appears to be to use the 
> {{java.naming.ldap.factory.socket}} parameter to set the socket factory to a 
> custom SSL socket factory which correctly sets the key and trust store 
> parameters. See an example [here|https://stackoverflow.com/a/4615497/4979203].






[GitHub] [hadoop] hadoop-yetus commented on issue #1134: HADOOP-16433. S3Guard: Filter expired entries and tombstones when lis…

2019-07-22 Thread GitBox
hadoop-yetus commented on issue #1134: HADOOP-16433. S3Guard: Filter expired 
entries and tombstones when lis…
URL: https://github.com/apache/hadoop/pull/1134#issuecomment-513847726
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 14 | https://github.com/apache/hadoop/pull/1134 does not 
apply to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1134/4/console |
   | versions | git=2.7.4 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Updated] (HADOOP-16443) Improve help text for setfacl --set option

2019-07-22 Thread Stephen O'Donnell (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HADOOP-16443:
---
Status: Patch Available  (was: Open)

> Improve help text for setfacl --set option
> --
>
> Key: HADOOP-16443
> URL: https://issues.apache.org/jira/browse/HADOOP-16443
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Minor
> Attachments: HADOOP-16443.001.patch
>
>
> The help text associated with the command "setfacl --set" states:
> {quote}
>  --set   Fully replace the ACL, discarding all existing entries. The
>   <acl_spec> must include entries for user, group, and others for
>   compatibility with permission bits.
> {quote}
> However the actual behaviour is a bit more subtle:
> {quote}
> If the ACL spec contains only access entries, then the existing default 
> entries are retained. If the ACL spec contains only default entries, then the 
> existing access entries are retained. If the ACL spec contains both access 
> and default entries, then both are replaced.
> {quote}
> This Jira will improve the help text to align more closely with the actual 
> behaviour.






[jira] [Updated] (HADOOP-16443) Improve help text for setfacl --set option

2019-07-22 Thread Stephen O'Donnell (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HADOOP-16443:
---
Attachment: HADOOP-16443.001.patch

> Improve help text for setfacl --set option
> --
>
> Key: HADOOP-16443
> URL: https://issues.apache.org/jira/browse/HADOOP-16443
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Minor
> Attachments: HADOOP-16443.001.patch
>
>
> The help text associated with the command "setfacl --set" states:
> {quote}
>  --set   Fully replace the ACL, discarding all existing entries. The
>   <acl_spec> must include entries for user, group, and others for
>   compatibility with permission bits.
> {quote}
> However the actual behaviour is a bit more subtle:
> {quote}
> If the ACL spec contains only access entries, then the existing default 
> entries are retained. If the ACL spec contains only default entries, then the 
> existing access entries are retained. If the ACL spec contains both access 
> and default entries, then both are replaced.
> {quote}
> This Jira will improve the help text to align more closely with the actual 
> behaviour.






[GitHub] [hadoop] steveloughran commented on issue #1123: HADOOP-16380 S3Guard to determine empty directory status for all non-root directories

2019-07-22 Thread GitBox
steveloughran commented on issue #1123: HADOOP-16380 S3Guard to determine empty 
directory status for all non-root directories
URL: https://github.com/apache/hadoop/pull/1123#issuecomment-513843313
 
 
   I'm going to say the terasort failures are unrelated: somehow teragen failed 
and the successors refuse to work at that point. 
   
   Regarding the other failure, it looks potentially like a bug in the initial 
delete code, which tried to delete a file that was already missing when it 
looked; that's the one rethrown, as we only throw the first failure in the 
loop, not the final one. I'll address those with some better checks and reporting.





[jira] [Commented] (HADOOP-16439) Upgrade bundled Tomcat in branch-2

2019-07-22 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16890260#comment-16890260
 ] 

Hadoop QA commented on HADOOP-16439:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m 
47s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
33s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
24s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
21s{color} | {color:green} branch-2 passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
14s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green} branch-2 passed with JDK v1.8.0_212 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
45s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
11s{color} | {color:green} the patch passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
9s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed with JDK v1.8.0_212 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
28s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
38s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
49s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 2s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}123m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:da675796017 |
| JIRA Issue | HADOOP-16439 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12975256/HADOOP-16439-branch-2.000.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  |
| uname | Linux 7219bb933a45 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[GitHub] [hadoop] bshashikant commented on issue #1124: HDDS-1749 : Ozone Client should randomize the list of nodes in pipeli…

2019-07-22 Thread GitBox
bshashikant commented on issue #1124: HDDS-1749 : Ozone Client should randomize 
the list of nodes in pipeli…
URL: https://github.com/apache/hadoop/pull/1124#issuecomment-513842087
 
 
   +1 LGTM.





[jira] [Created] (HADOOP-16443) Improve help text for setfacl --set option

2019-07-22 Thread Stephen O'Donnell (JIRA)
Stephen O'Donnell created HADOOP-16443:
--

 Summary: Improve help text for setfacl --set option
 Key: HADOOP-16443
 URL: https://issues.apache.org/jira/browse/HADOOP-16443
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 3.3.0
Reporter: Stephen O'Donnell
Assignee: Stephen O'Donnell


The help text associated with the command "setfacl --set" states:

{quote}

 --set   Fully replace the ACL, discarding all existing entries. The
  <acl_spec> must include entries for user, group, and others for
  compatibility with permission bits.

{quote}

However the actual behaviour is a bit more subtle:

{quote}

If the ACL spec contains only access entries, then the existing default entries 
are retained. If the ACL spec contains only default entries, then the existing 
access entries are retained. If the ACL spec contains both access and default 
entries, then both are replaced.

{quote}

This Jira will improve the help text to align more closely with the actual behaviour.






[GitHub] [hadoop] bgaborg commented on issue #1123: HADOOP-16380 S3Guard to determine empty directory status for all non-root directories

2019-07-22 Thread GitBox
bgaborg commented on issue #1123: HADOOP-16380 S3Guard to determine empty 
directory status for all non-root directories
URL: https://github.com/apache/hadoop/pull/1123#issuecomment-513832536
 
 
   -Dscale against ireland with dynamo: some tests failed, so here are the results after the rerun:
   ```
   [INFO] Running org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir
   [ERROR] Tests run: 9, Failures: 1, Errors: 2, Skipped: 0, Time elapsed: 
224.725 s <<< FAILURE! - in 
org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir
   [ERROR] 
testRecursiveRootListing(org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir)
  Time elapsed: 180.018 s  <<< ERROR!
   org.junit.runners.model.TestTimedOutException: test timed out after 18 
milliseconds
   
   [ERROR] 
testRmEmptyRootDirNonRecursive(org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir)
  Time elapsed: 31.867 s  <<< FAILURE!
   java.lang.AssertionError:
   After 12 attempts: listing after rm /* not empty
   final [00] 
S3AFileStatus{path=s3a://gabota-versioned-bucket-ireland/tests3ascale; 
isDirectory=true; modification_time=0; access_time=0; owner=gaborbota; 
group=gaborbota; permission=rwxrwxrwx; isSymlink=false; hasAcl=false; 
isEncrypted=true; isErasureCoded=false} isEmptyDirectory=UNKNOWN eTag=null 
versionId=null
   
   deleted [00] 
S3AFileStatus{path=s3a://gabota-versioned-bucket-ireland/tests3ascale; 
isDirectory=true; modification_time=0; access_time=0; owner=gaborbota; 
group=gaborbota; permission=rwxrwxrwx; isSymlink=false; hasAcl=false; 
isEncrypted=true; isErasureCoded=false} isEmptyDirectory=UNKNOWN eTag=null 
versionId=null
   
   original [00] S3AFileStatus{path=s3a://gabota-versioned-bucket-ireland/test; 
isDirectory=true; modification_time=0; access_time=0; owner=gaborbota; 
group=gaborbota; permission=rwxrwxrwx; isSymlink=false; hasAcl=false; 
isEncrypted=true; isErasureCoded=false} isEmptyDirectory=FALSE eTag=null 
versionId=null
   [01] S3AFileStatus{path=s3a://gabota-versioned-bucket-ireland/tests3ascale; 
isDirectory=true; modification_time=0; access_time=0; owner=gaborbota; 
group=gaborbota; permission=rwxrwxrwx; isSymlink=false; hasAcl=false; 
isEncrypted=true; isErasureCoded=false} isEmptyDirectory=UNKNOWN eTag=null 
versionId=null
   
   
   [ERROR] 
testListEmptyRootDirectory(org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir)
  Time elapsed: 2.905 s  <<< ERROR!
   java.io.FileNotFoundException: about to be deleted file: not found 
s3a://gabota-versioned-bucket-ireland/tests3ascale in 
s3a://gabota-versioned-bucket-ireland/
at 
org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir.testListEmptyRootDirectory(ITestS3AContractRootDir.java:85)
   Caused by: java.io.FileNotFoundException: No such file or directory: 
s3a://gabota-versioned-bucket-ireland/tests3ascale
at 
org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir.testListEmptyRootDirectory(ITestS3AContractRootDir.java:85)
   
   [INFO] Running 
org.apache.hadoop.fs.s3a.commit.terasort.ITestTerasortMagicCommitter
   [ERROR] Tests run: 7, Failures: 1, Errors: 2, Skipped: 0, Time elapsed: 
67.798 s <<< FAILURE! - in 
org.apache.hadoop.fs.s3a.commit.terasort.ITestTerasortMagicCommitter
   [ERROR] 
test_110_teragen(org.apache.hadoop.fs.s3a.commit.terasort.ITestTerasortMagicCommitter)
  Time elapsed: 16.361 s  <<< FAILURE!
   java.lang.AssertionError: Teragen(1000, 
s3a://gabota-versioned-bucket-ireland/terasort-ITestTerasortMagicCommitter/sortin)
 failed expected:<0> but was:<1>
   
   [ERROR] 
test_120_terasort(org.apache.hadoop.fs.s3a.commit.terasort.ITestTerasortMagicCommitter)
  Time elapsed: 2.044 s  <<< ERROR!
   java.io.FileNotFoundException: Output directory 
s3a://gabota-versioned-bucket-ireland/terasort-ITestTerasortMagicCommitter/sortin
 from previous teragen stage not found: Job may not have executed: not found 
s3a://gabota-versioned-bucket-ireland/terasort-ITestTerasortMagicCommitter/sortin
 in s3a://gabota-versioned-bucket-ireland/terasort-ITestTerasortMagicCommitter
   Caused by: java.io.FileNotFoundException: No such file or directory: 
s3a://gabota-versioned-bucket-ireland/terasort-ITestTerasortMagicCommitter/sortin
   
   [ERROR] 
test_130_teravalidate(org.apache.hadoop.fs.s3a.commit.terasort.ITestTerasortMagicCommitter)
  Time elapsed: 2.033 s  <<< ERROR!
   java.io.FileNotFoundException: Output directory 
s3a://gabota-versioned-bucket-ireland/terasort-ITestTerasortMagicCommitter/sortout
 from previous terasort stage not found: Job may not have executed: not found 
s3a://gabota-versioned-bucket-ireland/terasort-ITestTerasortMagicCommitter/sortout
 in s3a://gabota-versioned-bucket-ireland/terasort-ITestTerasortMagicCommitter
   Caused by: java.io.FileNotFoundException: No such file or directory: 
s3a://gabota-versioned-bucket-ireland/terasort-ITestTerasortMagicCommitter/sortout
   
   [INFO]
   [INFO] Results:
   [INFO]
   [ERROR] Failures:
   [ERROR]   

[jira] [Comment Edited] (HADOOP-16431) Remove useless log in IOUtils.java and ExceptionDiags.java

2019-07-22 Thread Lisheng Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16890230#comment-16890230
 ] 

Lisheng Sun edited comment on HADOOP-16431 at 7/22/19 3:07 PM:
---

hi [~elgoiri] [~ayushtkn] [~xkrogen], could you find time to review this patch? 
Thank you.


was (Author: leosun08):
hi [~elgoiri] [~ayushtkn], Could you have time to review this patch? Thank you.

> Remove useless log in IOUtils.java and ExceptionDiags.java
> --
>
> Key: HADOOP-16431
> URL: https://issues.apache.org/jira/browse/HADOOP-16431
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Priority: Minor
> Attachments: HADOOP-16431.001.patch, HADOOP-16431.002.patch
>
>
> When there is no String constructor for the exception, we log a warning 
> message and rethrow the exception. We can change the log level to TRACE/DEBUG.
> {code:java}
> private static <T extends IOException> T wrapWithMessage(
>     T exception, String msg) {
>   Class<? extends Throwable> clazz = exception.getClass();
>   try {
>     Constructor<? extends Throwable> ctor =
>         clazz.getConstructor(String.class);
>     Throwable t = ctor.newInstance(msg);
>     return (T) (t.initCause(exception));
>   } catch (Throwable e) {
>     LOG.trace("Unable to wrap exception of type " +
>         clazz + ": it has no (String) constructor", e);
>     return exception;
>   }
> }
> {code}
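
The quoted fragment lost its generics to the archive's HTML mangling. As a self-contained illustration of the pattern under discussion (hypothetical class name and type bound, not Hadoop's actual IOUtils code), it can be exercised like this:

```java
import java.lang.reflect.Constructor;

public class WrapDemo {

  /**
   * Standalone sketch of the quoted pattern: copy the exception via its
   * (String) constructor to attach extra context, chaining the original
   * as the cause; if the type has no such constructor, return the
   * original unchanged.
   */
  @SuppressWarnings("unchecked")
  static <T extends Throwable> T wrapWithMessage(T exception, String msg) {
    Class<? extends Throwable> clazz = exception.getClass();
    try {
      Constructor<? extends Throwable> ctor = clazz.getConstructor(String.class);
      Throwable t = ctor.newInstance(msg);
      return (T) t.initCause(exception);
    } catch (Throwable e) {
      // No (String) constructor, or instantiation failed: keep the original.
      // This is the branch whose log level the patch proposes to lower.
      return exception;
    }
  }

  public static void main(String[] args) {
    // IllegalStateException has a (String) constructor, so we get a
    // wrapped copy with the original chained as the cause.
    IllegalStateException wrapped = wrapWithMessage(
        new IllegalStateException("boom"), "while doing X: boom");
    if (!"while doing X: boom".equals(wrapped.getMessage())) {
      throw new AssertionError("message not replaced");
    }
    if (wrapped.getCause() == null) {
      throw new AssertionError("original cause not chained");
    }
  }
}
```

When the exception type lacks a (String) constructor, the helper silently returns the original, so lowering the log level only affects diagnostics, not behavior.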



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16431) Remove useless log in IOUtils.java and ExceptionDiags.java

2019-07-22 Thread Lisheng Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16890230#comment-16890230
 ] 

Lisheng Sun commented on HADOOP-16431:
--

hi [~elgoiri] [~ayushtkn], could you find time to review this patch? Thank you.

> Remove useless log in IOUtils.java and ExceptionDiags.java
> --
>
> Key: HADOOP-16431
> URL: https://issues.apache.org/jira/browse/HADOOP-16431
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Priority: Minor
> Attachments: HADOOP-16431.001.patch, HADOOP-16431.002.patch
>
>
> When there is no String constructor for the exception, we log a warning 
> message and rethrow the exception. We can change the log level to TRACE/DEBUG.
> {code:java}
> private static <T extends IOException> T wrapWithMessage(
>     T exception, String msg) {
>   Class<? extends Throwable> clazz = exception.getClass();
>   try {
>     Constructor<? extends Throwable> ctor =
>         clazz.getConstructor(String.class);
>     Throwable t = ctor.newInstance(msg);
>     return (T) (t.initCause(exception));
>   } catch (Throwable e) {
>     LOG.trace("Unable to wrap exception of type " +
>         clazz + ": it has no (String) constructor", e);
>     return exception;
>   }
> }
> {code}






[GitHub] [hadoop] hadoop-yetus commented on issue #1115: HADOOP-16207 testMR failures

2019-07-22 Thread GitBox
hadoop-yetus commented on issue #1115: HADOOP-16207 testMR failures
URL: https://github.com/apache/hadoop/pull/1115#issuecomment-513813503
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 47 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 11 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1058 | trunk passed |
   | +1 | compile | 32 | trunk passed |
   | +1 | checkstyle | 23 | trunk passed |
   | +1 | mvnsite | 39 | trunk passed |
   | +1 | shadedclient | 695 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 29 | trunk passed |
   | 0 | spotbugs | 63 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 62 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 33 | the patch passed |
   | +1 | compile | 30 | the patch passed |
   | +1 | javac | 30 | the patch passed |
   | -0 | checkstyle | 20 | hadoop-tools/hadoop-aws: The patch generated 9 new 
+ 5 unchanged - 1 fixed = 14 total (was 6) |
   | +1 | mvnsite | 32 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 763 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 21 | the patch passed |
   | +1 | findbugs | 64 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 286 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 3338 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.8 Server=18.09.8 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1115/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1115 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle |
   | uname | Linux fbd29dcac157 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / acdb0a1 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1115/6/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1115/6/testReport/ |
   | Max. process+thread count | 446 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1115/6/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[jira] [Commented] (HADOOP-16442) Intermittent failure of ITestS3GuardToolDynamoDB#testDynamoDBInitDestroyCycle

2019-07-22 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16890148#comment-16890148
 ] 

Steve Loughran commented on HADOOP-16442:
-

In HADOOP-15183 there's a change in the S3GuardTool.destroy command; we map a 
timeout in waitForTable deletion to a new {{TableDeleteTimeoutException}} 
(HADOOP-16364), then in the tool we catch it and downgrade to a debug message, 
on the basis that "it's just AWS taking its time".

Should our test be spinning a bit here for those test runs where AWS takes too 
long to delete a table?

> Intermittent failure of ITestS3GuardToolDynamoDB#testDynamoDBInitDestroyCycle
> -
>
> Key: HADOOP-16442
> URL: https://issues.apache.org/jira/browse/HADOOP-16442
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Priority: Minor
>
> ITestS3GuardToolDynamoDB#testDynamoDBInitDestroyCycle test is failing 
> intermittently against Ireland with the following stacktrace:
> {noformat}
> [ERROR] Tests run: 17, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 248.895 s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB
> [ERROR] 
> testDynamoDBInitDestroyCycle(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB)
>   Time elapsed: 141.461 s  <<< FAILURE!
> java.lang.AssertionError: s3guard.test.testDynamoDBInitDestroy912421434 still 
> exists
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertFalse(Assert.java:64)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB.testDynamoDBInitDestroyCycle(ITestS3GuardToolDynamoDB.java:250)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:748)
> {noformat}






[GitHub] [hadoop] steveloughran commented on issue #1115: HADOOP-16207 testMR failures

2019-07-22 Thread GitBox
steveloughran commented on issue #1115: HADOOP-16207 testMR failures
URL: https://github.com/apache/hadoop/pull/1115#issuecomment-513781856
 
 
   Note that this test declares four committer tests but only parameterizes 
three: directory, partitioned and magic. We don't do an explicit Staging 
committer, just its two subclasses. That's because those are the actual 
committers people are instructed to use, and we save one test run by cutting 
it. 





[GitHub] [hadoop] steveloughran commented on issue #1115: HADOOP-16207 testMR failures

2019-07-22 Thread GitBox
steveloughran commented on issue #1115: HADOOP-16207 testMR failures
URL: https://github.com/apache/hadoop/pull/1115#issuecomment-513781359
 
 
   Tested: S3 Ireland. No failures on my test run. 
   
   I do hope this modified test run will pick up on any failures which have 
been happening on other PRs, e.g. #1123, so we can then track down what the 
failure was.





[GitHub] [hadoop] steveloughran commented on issue #1123: HADOOP-16380 S3Guard to determine empty directory status for all non-root directories

2019-07-22 Thread GitBox
steveloughran commented on issue #1123: HADOOP-16380 S3Guard to determine empty 
directory status for all non-root directories
URL: https://github.com/apache/hadoop/pull/1123#issuecomment-513780518
 
 
   @bgaborg, did that "1 approval" mean a +1? If so, can you add it 
explicitly for the record? Thanks





[GitHub] [hadoop] steveloughran commented on issue #1123: HADOOP-16380 S3Guard to determine empty directory status for all non-root directories

2019-07-22 Thread GitBox
steveloughran commented on issue #1123: HADOOP-16380 S3Guard to determine empty 
directory status for all non-root directories
URL: https://github.com/apache/hadoop/pull/1123#issuecomment-513780088
 
 
   Thanks. PR #1115 is working on the testMRJob failures; the initial patch 
will let us collect failures in the local FS, and the test reports will also 
collect and print the result from the AM. And by working in the parallel phase 
again (yet still executing each MR job in an isolated sequence), we get a 
speedup of a minute or two.





[jira] [Updated] (HADOOP-16439) Upgrade bundled Tomcat in branch-2

2019-07-22 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-16439:
-
Assignee: Masatake Iwasaki
  Status: Patch Available  (was: Open)

Thank you [~iwasakims]!
The change looks surprisingly small. I'll submit it for you and let's see what 
the precommit check says.

> Upgrade bundled Tomcat in branch-2
> --
>
> Key: HADOOP-16439
> URL: https://issues.apache.org/jira/browse/HADOOP-16439
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: httpfs, kms
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
> Attachments: HADOOP-16439-branch-2.000.patch
>
>
> proposed by  [~jojochuang] in mailing list:
> {quote}We migrated from Tomcat to Jetty in Hadoop3, because Tomcat 6 went EOL 
> in
>  2016. But we did not realize that, three years after Tomcat 6's EOL, a majority
>  of Hadoop users are still in Hadoop 2, and it looks like Hadoop 2 will stay
>  alive for another few years.
> Backporting Jetty to Hadoop2 is probably too big of an incompatibility.
>  How about migrating to Tomcat9?
> {quote}






[GitHub] [hadoop] bgaborg commented on issue #1123: HADOOP-16380 S3Guard to determine empty directory status for all non-root directories

2019-07-22 Thread GitBox
bgaborg commented on issue #1123: HADOOP-16380 S3Guard to determine empty 
directory status for all non-root directories
URL: https://github.com/apache/hadoop/pull/1123#issuecomment-513770130
 
 
   Tested against Ireland with dynamo: known testMRJob failures. 
   Scale tests are running with dynamo. 
   Will run localms tests as well and report back with the results.





[jira] [Created] (HADOOP-16442) Intermittent failure of ITestS3GuardToolDynamoDB#testDynamoDBInitDestroyCycle

2019-07-22 Thread Gabor Bota (JIRA)
Gabor Bota created HADOOP-16442:
---

 Summary: Intermittent failure of 
ITestS3GuardToolDynamoDB#testDynamoDBInitDestroyCycle
 Key: HADOOP-16442
 URL: https://issues.apache.org/jira/browse/HADOOP-16442
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.3.0
Reporter: Gabor Bota


ITestS3GuardToolDynamoDB#testDynamoDBInitDestroyCycle test is failing 
intermittently against Ireland with the following stacktrace:

{noformat}
[ERROR] Tests run: 17, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
248.895 s <<< FAILURE! - in 
org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB
[ERROR] 
testDynamoDBInitDestroyCycle(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB)
  Time elapsed: 141.461 s  <<< FAILURE!
java.lang.AssertionError: s3guard.test.testDynamoDBInitDestroy912421434 still 
exists
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertFalse(Assert.java:64)
at 
org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB.testDynamoDBInitDestroyCycle(ITestS3GuardToolDynamoDB.java:250)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
{noformat}






[jira] [Work started] (HADOOP-16433) S3Guard: Filter expired entries and tombstones when listing with MetadataStore#listChildren

2019-07-22 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-16433 started by Gabor Bota.
---
> S3Guard: Filter expired entries and tombstones when listing with 
> MetadataStore#listChildren
> ---
>
> Key: HADOOP-16433
> URL: https://issues.apache.org/jira/browse/HADOOP-16433
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Blocker
>
> Currently, we don't filter out entries in {{listChildren}} implementations.
> This can cause bugs and inconsistencies, so this should be fixed.
> It can lead to a state where we can't recover from the following:
> {{guarded and raw (OOB op) clients are doing ops to S3}}
> {noformat}
> Guarded: touch /
> Guarded: touch /
> Guarded: rm / {{-> tombstone in MS}}
> RAW: touch //file.ext {{-> file is hidden with a tombstone}}
> Guarded: ls / {{-> only  will show up in the listing. }}
> {noformat}
> After we change the following code
> {code:java}
>   final List<DDBPathMetadata> metas = new ArrayList<>();
>   for (Item item : items) {
> DDBPathMetadata meta = itemToPathMetadata(item, username);
> metas.add(meta);
>   }
> {code}
> to 
> {code:java}
> // handle expiry - only add not expired entries to listing.
> if (meta.getLastUpdated() == 0 ||
> !meta.isExpired(ttlTimeProvider.getMetadataTtl(),
> ttlTimeProvider.getNow())) {
>   metas.add(meta);
> }
> {code}
> we will filter out expired entries from the listing, so we can recover from 
> these kinds of OOB ops.
> Note:  we have to handle the lastUpdated == 0 case, where the lastUpdated 
> field is not filled in!
> Note: this can only be fixed cleanly after HADOOP-16383 is fixed because we 
> need to have the TTLtimeProvider in MS to handle this internally.
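
The proposed filtering can be demonstrated in isolation. The class below is a hypothetical sketch: Entry stands in for DDBPathMetadata, the TTL check mirrors the snippet above, and the lastUpdated == 0 special case (field not filled in) is preserved:

```java
import java.util.ArrayList;
import java.util.List;

public class TtlFilterSketch {

  /** Minimal stand-in for DDBPathMetadata. */
  static class Entry {
    final String path;
    final long lastUpdated; // 0 means "not filled in"

    Entry(String path, long lastUpdated) {
      this.path = path;
      this.lastUpdated = lastUpdated;
    }

    boolean isExpired(long ttlMs, long now) {
      return now - lastUpdated > ttlMs;
    }
  }

  /**
   * Keep only entries that are not expired; lastUpdated == 0 entries are
   * always kept, since their age is unknown.
   */
  static List<Entry> filterExpired(List<Entry> entries, long ttlMs, long now) {
    List<Entry> out = new ArrayList<>();
    for (Entry e : entries) {
      if (e.lastUpdated == 0 || !e.isExpired(ttlMs, now)) {
        out.add(e);
      }
    }
    return out;
  }

  public static void main(String[] args) {
    long now = 100_000;
    List<Entry> in = new ArrayList<>();
    in.add(new Entry("/fresh", now - 1_000));    // within TTL: kept
    in.add(new Entry("/stale", now - 50_000));   // past TTL: dropped
    in.add(new Entry("/unknown", 0));            // lastUpdated unset: kept
    List<Entry> out = filterExpired(in, 10_000, now);
    if (out.size() != 2) {
      throw new AssertionError("expected 2 entries, got " + out.size());
    }
  }
}
```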






[GitHub] [hadoop] bgaborg commented on a change in pull request #1123: HADOOP-16380 S3Guard to determine empty directory status for all non-root directories

2019-07-22 Thread GitBox
bgaborg commented on a change in pull request #1123: HADOOP-16380 S3Guard to 
determine empty directory status for all non-root directories
URL: https://github.com/apache/hadoop/pull/1123#discussion_r305812343
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3ATestUtils.java
 ##
 @@ -818,6 +828,22 @@ public static void callQuietly(final Logger log,
 return null;
   }
 
+  /**
+   * Get a file status from S3A with the {@code needEmptyDirectoryFlag}
+   * state probed.
+   * This accesses a package-private method in the
+   * S3A filesystem.
+   * @param fs filesystem
+   * @param dir directory
+   * @return a status
+   * @throws IOException
+   */
+  public static S3AFileStatus getStatusWithEmptyDirFlag(
 
 Review comment:
   Just a note to remind me that I have a similar method in a PR, so I have to rebase: 
https://github.com/apache/hadoop/pull/1134/files#diff-d946bda31a9bf644846f57b45a89524dR831





[GitHub] [hadoop] bgaborg commented on a change in pull request #1123: HADOOP-16380 S3Guard to determine empty directory status for all non-root directories

2019-07-22 Thread GitBox
bgaborg commented on a change in pull request #1123: HADOOP-16380 S3Guard to 
determine empty directory status for all non-root directories
URL: https://github.com/apache/hadoop/pull/1123#discussion_r305809514
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
 ##
 @@ -2623,7 +2609,8 @@ S3AFileStatus innerGetFileStatus(final Path f,
 // Check MetadataStore, if any.
 PathMetadata pm = null;
 if (hasMetadataStore()) {
-  pm = S3Guard.getWithTtl(metadataStore, path, ttlTimeProvider);
+  pm = S3Guard.getWithTtl(metadataStore, path, ttlTimeProvider,
+  needEmptyDirectoryFlag);
 
 Review comment:
   Good that you found this before shipping it to a customer.





[GitHub] [hadoop] mukul1987 commented on a change in pull request #1113: HDDS-1798. Propagate failure in writeStateMachineData to Ratis. Contributed by Supratim Deka

2019-07-22 Thread GitBox
mukul1987 commented on a change in pull request #1113: HDDS-1798. Propagate 
failure in writeStateMachineData to Ratis. Contributed by Supratim Deka
URL: https://github.com/apache/hadoop/pull/1113#discussion_r304777905
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
 ##
 @@ -402,15 +411,28 @@ private ExecutorService getCommandExecutor(
 // Remove the future once it finishes execution from the
 // writeChunkFutureMap.
 writeChunkFuture.thenApply(r -> {
-  metrics.incNumBytesWrittenCount(
-  requestProto.getWriteChunk().getChunkData().getLen());
+  if (r.getResult() != ContainerProtos.Result.SUCCESS) {
+StorageContainerException sce =
 
 Review comment:
   Lets add a metric for this failure
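
A failure counter of the kind being requested could look like the following standalone sketch (hypothetical names throughout; Ozone's real ContainerStateMachine metrics class has its own API):

```java
import java.util.concurrent.atomic.AtomicLong;

public class WriteChunkMetricsSketch {

  // Hypothetical counters for writeStateMachineData outcomes.
  private final AtomicLong numBytesWritten = new AtomicLong();
  private final AtomicLong numWriteChunkFails = new AtomicLong();

  /** Record the outcome of one write-chunk attempt. */
  void recordWriteChunk(boolean success, long len) {
    if (success) {
      numBytesWritten.addAndGet(len);
    } else {
      numWriteChunkFails.incrementAndGet();
    }
  }

  long bytesWritten() {
    return numBytesWritten.get();
  }

  long writeChunkFails() {
    return numWriteChunkFails.get();
  }

  public static void main(String[] args) {
    WriteChunkMetricsSketch m = new WriteChunkMetricsSketch();
    m.recordWriteChunk(true, 4096);  // successful write: bytes counted
    m.recordWriteChunk(false, 0);    // failed write: failure counted
    if (m.bytesWritten() != 4096 || m.writeChunkFails() != 1) {
      throw new AssertionError("unexpected metric values");
    }
  }
}
```

In the actual patch, the failure branch that raises the StorageContainerException would bump such a counter before completing the future exceptionally.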

