[jira] [Commented] (HADOOP-16152) Upgrade Eclipse Jetty version to 9.4.x

2019-07-30 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896799#comment-16896799
 ] 

Hadoop QA commented on HADOOP-16152:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  8m 
52s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
43s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
59s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 16m 59s{color} 
| {color:red} root generated 2 new + 1479 unchanged - 0 fixed = 1481 total (was 
1479) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 10m 
20s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
23s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
49s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}109m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HADOOP-16152 |
| JIRA Patch URL | 

[GitHub] [hadoop] bharatviswa504 commented on issue #1181: HDDS-1849. Implement S3 Complete MPU request to use Cache and DoubleBuffer.

2019-07-30 Thread GitBox
bharatviswa504 commented on issue #1181: HDDS-1849. Implement S3 Complete MPU 
request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1181#issuecomment-516705831
 
 
   /retest


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on issue #1174: HDDS-1856. Make required changes for Non-HA to use new HA code in OM.

2019-07-30 Thread GitBox
bharatviswa504 commented on issue #1174: HDDS-1856. Make required changes for 
Non-HA to use new HA code in OM.
URL: https://github.com/apache/hadoop/pull/1174#issuecomment-516703796
 
 
   Opened a JIRA, HDDS-1872, for the failure related to 
TestS3MultipartUploadAbortResponse.
   The rest of the test failures are not related to this patch.
   Thank you, @arp7, for the review. I will commit this to trunk.





[GitHub] [hadoop] bharatviswa504 merged pull request #1174: HDDS-1856. Make required changes for Non-HA to use new HA code in OM.

2019-07-30 Thread GitBox
bharatviswa504 merged pull request #1174: HDDS-1856. Make required changes for 
Non-HA to use new HA code in OM.
URL: https://github.com/apache/hadoop/pull/1174
 
 
   





[GitHub] [hadoop] hadoop-yetus commented on issue #1174: HDDS-1856. Make required changes for Non-HA to use new HA code in OM.

2019-07-30 Thread GitBox
hadoop-yetus commented on issue #1174: HDDS-1856. Make required changes for 
Non-HA to use new HA code in OM.
URL: https://github.com/apache/hadoop/pull/1174#issuecomment-516700673
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 82 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 2 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 25 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 717 | trunk passed |
   | +1 | compile | 374 | trunk passed |
   | +1 | checkstyle | 73 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 889 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 161 | trunk passed |
   | 0 | spotbugs | 413 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 603 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 559 | the patch passed |
   | +1 | compile | 364 | the patch passed |
   | +1 | javac | 364 | the patch passed |
   | +1 | checkstyle | 81 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 640 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 152 | the patch passed |
   | +1 | findbugs | 631 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 350 | hadoop-hdds in the patch failed. |
   | -1 | unit | 3430 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 39 | The patch does not generate ASF License warnings. |
   | | | 9290 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestHybridPipelineOnDatanode |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestDeleteContainerHandler
 |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStream |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.Test2WayCommitInRatis |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | hadoop.ozone.web.client.TestOzoneClient |
   |   | hadoop.hdds.scm.pipeline.TestPipelineClose |
   |   | hadoop.ozone.TestContainerStateMachineIdempotency |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.container.common.transport.server.ratis.TestCSMMetrics |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1174/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1174 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux ab84367207b3 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0f2dad6 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1174/3/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1174/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1174/3/testReport/ |
   | Max. process+thread count | 3684 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1174/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #1174: HDDS-1856. Make required changes for Non-HA to use new HA code in OM.

2019-07-30 Thread GitBox
hadoop-yetus commented on issue #1174: HDDS-1856. Make required changes for 
Non-HA to use new HA code in OM.
URL: https://github.com/apache/hadoop/pull/1174#issuecomment-516694943
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 45 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 25 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 625 | trunk passed |
   | +1 | compile | 363 | trunk passed |
   | +1 | checkstyle | 83 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 848 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 150 | trunk passed |
   | 0 | spotbugs | 419 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 614 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 561 | the patch passed |
   | +1 | compile | 352 | the patch passed |
   | +1 | javac | 352 | the patch passed |
   | +1 | checkstyle | 71 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 617 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 153 | the patch passed |
   | +1 | findbugs | 686 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 194 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1915 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 42 | The patch does not generate ASF License warnings. |
   | | | 7473 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachine |
   |   | hadoop.ozone.client.rpc.TestHybridPipelineOnDatanode |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1174/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1174 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 4bbc9ab239dc 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0f2dad6 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1174/4/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1174/4/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1174/4/testReport/ |
   | Max. process+thread count | 3749 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1174/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-15397) Failed to start the estimator of Resource Estimator Service

2019-07-30 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896765#comment-16896765
 ] 

Hadoop QA commented on HADOOP-15397:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red} 12m 
14s{color} | {color:red} Docker failed to build yetus/hadoop:17213a0. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-15397 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12919600/HADOOP-15397-branch-2.9.0.003.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16434/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Failed to start the estimator of Resource Estimator Service
> ---
>
> Key: HADOOP-15397
> URL: https://issues.apache.org/jira/browse/HADOOP-15397
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.9.0
>Reporter: zhangbutao
>Assignee: zhangbutao
>Priority: Major
> Fix For: 2.9.0
>
> Attachments: HADOOP-15397-001.path, 
> HADOOP-15397-branch-2.9.0.003.patch, HADOOP-15397.002.patch
>
>
> You get the following log if you start the estimator using the script 
> start-estimator.sh, and the estimator does not start:
> {code:java}
> starting resource estimator service
> starting estimator, logging to 
> /hadoop/share/hadoop/tools/resourceestimator/bin/../../../../../logs/hadoop-resourceestimator.out
> /hadoop/share/hadoop/tools/resourceestimator/bin/estimator-daemon.sh: line 
> 47: bin/estimator.sh: No such file or directory{code}
> Fix the bug in the script estimator-daemon.sh.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 opened a new pull request #1188: HDDS-1875. Fix failures in TestS3MultipartUploadAbortResponse.

2019-07-30 Thread GitBox
bharatviswa504 opened a new pull request #1188: HDDS-1875. Fix failures in 
TestS3MultipartUploadAbortResponse.
URL: https://github.com/apache/hadoop/pull/1188
 
 
   





[jira] [Commented] (HADOOP-15956) Use relative resource URLs across WebUI components

2019-07-30 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896759#comment-16896759
 ] 

Hadoop QA commented on HADOOP-15956:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HADOOP-15956 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-15956 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12953263/HADOOP-15956.001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16433/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Use relative resource URLs across WebUI components
> --
>
> Key: HADOOP-15956
> URL: https://issues.apache.org/jira/browse/HADOOP-15956
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Greg Phillips
>Assignee: Greg Phillips
>Priority: Minor
> Attachments: HADOOP-15956.001.patch
>
>
> Similar to HDFS-12961 there are absolute paths used for static resources in 
> the WebUI for HDFS & KMS which can cause issues when attempting to access 
> these pages via a reverse proxy. Using relative paths in all WebUI components 
> will allow pages to render properly when using a reverse proxy.






[GitHub] [hadoop] smengcl opened a new pull request #1187: HDDS-1829 On OM reload/restart OmMetrics#numKeys should be updated

2019-07-30 Thread GitBox
smengcl opened a new pull request #1187: HDDS-1829 On OM reload/restart 
OmMetrics#numKeys should be updated
URL: https://github.com/apache/hadoop/pull/1187
 
 
   





[GitHub] [hadoop] smengcl commented on issue #1164: HDDS-1829 On OM reload/restart OmMetrics#numKeys should be updated

2019-07-30 Thread GitBox
smengcl commented on issue #1164: HDDS-1829 On OM reload/restart 
OmMetrics#numKeys should be updated
URL: https://github.com/apache/hadoop/pull/1164#issuecomment-516678395
 
 
   New PR with checkstyle fix: https://github.com/apache/hadoop/pull/1187
   
   Pending CI.





[GitHub] [hadoop] smengcl commented on issue #1164: HDDS-1829 On OM reload/restart OmMetrics#numKeys should be updated

2019-07-30 Thread GitBox
smengcl commented on issue #1164: HDDS-1829 On OM reload/restart 
OmMetrics#numKeys should be updated
URL: https://github.com/apache/hadoop/pull/1164#issuecomment-516678138
 
 
   @bharatviswa504 Sure.





[jira] [Commented] (HADOOP-16267) Performance gain if you use replace() instead of replaceAll() for replacing patterns that do not use a regex

2019-07-30 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896726#comment-16896726
 ] 

Hadoop QA commented on HADOOP-16267:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HADOOP-16267 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-16267 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12966565/HADOOP-16267.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16431/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Performance gain if you use replace() instead of replaceAll() for replacing 
> patterns that do not use a regex 
> -
>
> Key: HADOOP-16267
> URL: https://issues.apache.org/jira/browse/HADOOP-16267
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.1.2
>Reporter: bd2019us
>Assignee: bd2019us
>Priority: Minor
>  Labels: pull-request-available
> Attachments: HADOOP-16267.patch
>
>
> There is a performance gain if you use replace() instead of replaceAll() for 
> replacing patterns that do not use a regex. This happens because replace() 
> does not need to compile the regex pattern the way replaceAll() does.
> Affected files:
>  * 
> hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java
>  * 
> hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/state/Graph.java
>  * 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/PrintJarMainClass.java
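
For illustration, a minimal hypothetical example of the point above (the class, variable names, and string values here are invented, not taken from the patch). Both calls produce the same result, but replaceAll() compiles its first argument as a regex on every call, while replace() performs a literal substitution:

{code:java}
public class ReplaceVsReplaceAll {
  public static void main(String[] args) {
    String path = "a.b.c";
    // replaceAll() treats "\\." as a regex and compiles a Pattern on each call.
    String viaRegex = path.replaceAll("\\.", "/");
    // replace() treats "." as a literal string; no regex machinery is involved.
    String viaLiteral = path.replace(".", "/");
    System.out.println(viaRegex + " " + viaLiteral); // prints: a/b/c a/b/c
  }
}
{code}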






[GitHub] [hadoop] wuzhilon commented on issue #330: YARN PROXYSERVER throw IOEXCEPTION

2019-07-30 Thread GitBox
wuzhilon commented on issue #330: YARN PROXYSERVER throw IOEXCEPTION
URL: https://github.com/apache/hadoop/pull/330#issuecomment-516672425
 
 
   This is a Hadoop 2.6.0 change, not hadoop-2.7.4.





[jira] [Commented] (HADOOP-12282) Connection thread's name should be updated after address changing is detected

2019-07-30 Thread Lisheng Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-12282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896710#comment-16896710
 ] 

Lisheng Sun commented on HADOOP-12282:
--

Hi [~ayushtkn] [~aajisaka] [~jojochuang] [~hexiaoqiao], could you find time to 
review this patch? Thank you.

> Connection thread's name should be updated after address changing is detected
> -
>
> Key: HADOOP-12282
> URL: https://issues.apache.org/jira/browse/HADOOP-12282
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Reporter: zhouyingchao
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HADOOP-12282-001.patch, HADOOP-12282.002.patch
>
>
> In a Hadoop HDFS cluster, I changed the standby NameNode's IP address (the 
> hostname was not changed and the routing tables were updated). After the 
> change, the cluster ran as normal.
>  However, I found that the datanode's IPC debug messages still print the 
> original IP address. Looking into the implementation, it turns out that the 
> original address is used in the thread's name, because the server address is 
> one of the constituent elements of that name. I think the thread's name 
> should be updated when the address change is detected.
> {code:java}
> Connection(ConnectionId remoteId, int serviceClass,
> Consumer<Connection> removeMethod) {
> ..
> UserGroupInformation ticket = remoteId.getTicket();
> // try SASL if security is enabled or if the ugi contains tokens.
> // this causes a SIMPLE client with tokens to attempt SASL
> boolean trySasl = UserGroupInformation.isSecurityEnabled() ||
>   (ticket != null && !ticket.getTokens().isEmpty());
> this.authProtocol = trySasl ? AuthProtocol.SASL : AuthProtocol.NONE;
> this.setName("IPC Client (" + socketFactory.hashCode() +") connection to " +
> server.toString() +
> " from " + ((ticket==null)?"an unknown user":ticket.getUserName()));
> this.setDaemon(true);
> }{code}
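
A minimal sketch of the proposed behavior (the helper name below is invented for illustration and is not taken from the attached patches): recompute the thread name from the connection's current server address once an address change is detected.

{code:java}
// Hypothetical helper, not from the attached patches: rebuilds the thread
// name so that subsequent log messages show the updated server address.
private void refreshThreadName() {
  UserGroupInformation ticket = remoteId.getTicket();
  this.setName("IPC Client (" + socketFactory.hashCode() + ") connection to "
      + server.toString()
      + " from " + ((ticket == null) ? "an unknown user" : ticket.getUserName()));
}
{code}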






[jira] [Assigned] (HADOOP-15956) Use relative resource URLs across WebUI components

2019-07-30 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HADOOP-15956:


Assignee: Greg Phillips

> Use relative resource URLs across WebUI components
> --
>
> Key: HADOOP-15956
> URL: https://issues.apache.org/jira/browse/HADOOP-15956
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Greg Phillips
>Assignee: Greg Phillips
>Priority: Trivial
> Attachments: HADOOP-15956.001.patch
>
>
> Similar to HDFS-12961 there are absolute paths used for static resources in 
> the WebUI for HDFS & KMS which can cause issues when attempting to access 
> these pages via a reverse proxy. Using relative paths in all WebUI components 
> will allow pages to render properly when using a reverse proxy.






[jira] [Updated] (HADOOP-15956) Use relative resource URLs across WebUI components

2019-07-30 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-15956:
-
Priority: Minor  (was: Trivial)

> Use relative resource URLs across WebUI components
> --
>
> Key: HADOOP-15956
> URL: https://issues.apache.org/jira/browse/HADOOP-15956
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Greg Phillips
>Assignee: Greg Phillips
>Priority: Minor
> Attachments: HADOOP-15956.001.patch
>
>
> Similar to HDFS-12961 there are absolute paths used for static resources in 
> the WebUI for HDFS & KMS which can cause issues when attempting to access 
> these pages via a reverse proxy. Using relative paths in all WebUI components 
> will allow pages to render properly when using a reverse proxy.






[jira] [Assigned] (HADOOP-16152) Upgrade Eclipse Jetty version to 9.4.x

2019-07-30 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HADOOP-16152:


Assignee: Yuming Wang

> Upgrade Eclipse Jetty version to 9.4.x
> --
>
> Key: HADOOP-16152
> URL: https://issues.apache.org/jira/browse/HADOOP-16152
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Yuming Wang
>Assignee: Yuming Wang
>Priority: Major
> Attachments: HADOOP-16152.v1.patch
>
>
> Some big data projects have upgraded Jetty to 9.4.x, which causes some 
> compatibility issues.
> Spark: 
> [https://github.com/apache/spark/blob/5a92b5a47cdfaea96a9aeedaf80969d825a382f2/pom.xml#L141]
> Calcite: 
> [https://github.com/apache/calcite/blob/avatica-1.13.0-rc0/pom.xml#L87]
> Hive: https://issues.apache.org/jira/browse/HIVE-21211






[jira] [Assigned] (HADOOP-16267) Performance gain if you use replace() instead of replaceAll() for replacing patterns that do not use a regex

2019-07-30 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HADOOP-16267:


Assignee: bd2019us

> Performance gain if you use replace() instead of replaceAll() for replacing 
> patterns that do not use a regex 
> -
>
> Key: HADOOP-16267
> URL: https://issues.apache.org/jira/browse/HADOOP-16267
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.1.2
>Reporter: bd2019us
>Assignee: bd2019us
>Priority: Minor
>  Labels: pull-request-available
> Attachments: HADOOP-16267.patch
>
>
> There is a performance gain if you use replace() instead of replaceAll() for 
> replacing patterns that do not use a regex. This happens because replace() 
> does not need to compile the regex pattern the way replaceAll() does.
> Affected files:
>  * 
> hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java
>  * 
> hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/state/Graph.java
>  * 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/PrintJarMainClass.java






[jira] [Assigned] (HADOOP-15397) Failed to start the estimator of Resource Estimator Service

2019-07-30 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HADOOP-15397:


Assignee: zhangbutao

> Failed to start the estimator of Resource Estimator Service
> ---
>
> Key: HADOOP-15397
> URL: https://issues.apache.org/jira/browse/HADOOP-15397
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.9.0
>Reporter: zhangbutao
>Assignee: zhangbutao
>Priority: Major
> Fix For: 2.9.0
>
> Attachments: HADOOP-15397-001.path, 
> HADOOP-15397-branch-2.9.0.003.patch, HADOOP-15397.002.patch
>
>
> You get the following log if you start the estimator using the script 
> start-estimator.sh, and the estimator does not start:
> {code:java}
> starting resource estimator service
> starting estimator, logging to 
> /hadoop/share/hadoop/tools/resourceestimator/bin/../../../../../logs/hadoop-resourceestimator.out
> /hadoop/share/hadoop/tools/resourceestimator/bin/estimator-daemon.sh: line 
> 47: bin/estimator.sh: No such file or directory{code}
> Fix the bug in the script estimator-daemon.sh.






[jira] [Commented] (HADOOP-15565) ViewFileSystem.close doesn't close child filesystems and causes FileSystem objects leak.

2019-07-30 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896698#comment-16896698
 ] 

Wei-Chiu Chuang commented on HADOOP-15565:
--

This is the same as HDFS-14645.

> ViewFileSystem.close doesn't close child filesystems and causes FileSystem 
> objects leak.
> 
>
> Key: HADOOP-15565
> URL: https://issues.apache.org/jira/browse/HADOOP-15565
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HADOOP-15565.0001.patch, HADOOP-15565.0002.patch
>
>
> When we create a ViewFileSystem, all its child filesystems are cached by 
> FileSystem.CACHE. Unless we close these child filesystems, they stay in 
> FileSystem.CACHE forever.
> I think we should let FileSystem.CACHE cache the ViewFileSystem only, and let 
> the ViewFileSystem cache all its child filesystems. Then we can close a 
> ViewFileSystem without a leak and without affecting other ViewFileSystems.
> I found this problem because I need to re-login to Kerberos and renew the 
> ViewFileSystem periodically. Because FileSystem.CACHE.Key is based on 
> UserGroupInformation, which changes every time I re-login, I can't use the 
> cached child filesystems when I create a new ViewFileSystem. And because 
> ViewFileSystem.close does nothing but remove itself from the cache, I leak all 
> its child filesystems in the cache.
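
A minimal sketch of the proposed ownership model (the class and method names below are invented for illustration, not taken from the attached patches): the view keeps its own child map, creates children via newInstance() so they bypass the shared FileSystem.CACHE, and closes them when the view itself is closed.

{code:java}
import java.io.IOException;
import java.net.URI;
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

// Hypothetical sketch, not the attached patch.
class ChildFileSystemCache {
  private final Map<URI, FileSystem> children = new HashMap<>();

  synchronized FileSystem getChild(URI uri, Configuration conf)
      throws IOException {
    FileSystem fs = children.get(uri);
    if (fs == null) {
      // newInstance() bypasses FileSystem.CACHE, so this child is owned by
      // the view rather than shared process-wide.
      fs = FileSystem.newInstance(uri, conf);
      children.put(uri, fs);
    }
    return fs;
  }

  synchronized void closeAll() throws IOException {
    for (FileSystem fs : children.values()) {
      fs.close(); // closing the view closes its children, avoiding the leak
    }
    children.clear();
  }
}
{code}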






[jira] [Assigned] (HADOOP-15565) ViewFileSystem.close doesn't close child filesystems and causes FileSystem objects leak.

2019-07-30 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HADOOP-15565:


Assignee: Jinglun

> ViewFileSystem.close doesn't close child filesystems and causes FileSystem 
> objects leak.
> 
>
> Key: HADOOP-15565
> URL: https://issues.apache.org/jira/browse/HADOOP-15565
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HADOOP-15565.0001.patch, HADOOP-15565.0002.patch
>
>
> When we create a ViewFileSystem, all its child filesystems are cached by 
> FileSystem.CACHE. Unless we close these child filesystems, they stay in 
> FileSystem.CACHE forever.
> I think we should let FileSystem.CACHE cache the ViewFileSystem only, and let 
> the ViewFileSystem cache all its child filesystems. Then we can close a 
> ViewFileSystem without a leak and without affecting other ViewFileSystems.
> I found this problem because I need to re-login to Kerberos and renew the 
> ViewFileSystem periodically. Because FileSystem.CACHE.Key is based on 
> UserGroupInformation, which changes every time I re-login, I can't use the 
> cached child filesystems when I create a new ViewFileSystem. And because 
> ViewFileSystem.close does nothing but remove itself from the cache, I leak all 
> its child filesystems in the cache.






[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1174: HDDS-1856. Make required changes for Non-HA to use new HA code in OM.

2019-07-30 Thread GitBox
bharatviswa504 commented on a change in pull request #1174: HDDS-1856. Make 
required changes for Non-HA to use new HA code in OM.
URL: https://github.com/apache/hadoop/pull/1174#discussion_r309011500
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/s3/bucket/TestS3BucketRequest.java
 ##
 @@ -0,0 +1,82 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.hadoop.ozone.om.request.s3.bucket;
+
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.rules.TemporaryFolder;
+import org.mockito.Mockito;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.audit.AuditLogger;
+import org.apache.hadoop.ozone.audit.AuditMessage;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OmMetadataManagerImpl;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
+
+import static org.mockito.ArgumentMatchers.any;
+import static org.mockito.Mockito.when;
+
+/**
+ * Base test class for S3 Bucket request.
+ */
+@SuppressWarnings("visibilityModifier")
+public class TestS3BucketRequest {
+  @Rule
+  public TemporaryFolder folder = new TemporaryFolder();
+
+  protected OzoneManager ozoneManager;
+  protected OMMetrics omMetrics;
+  protected OMMetadataManager omMetadataManager;
+  protected AuditLogger auditLogger;
+
+  // Just setting ozoneManagerDoubleBuffer which does no
 
 Review comment:
   Yes, thanks for catching it. Fixed it.





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1174: HDDS-1856. Make required changes for Non-HA to use new HA code in OM.

2019-07-30 Thread GitBox
bharatviswa504 commented on a change in pull request #1174: HDDS-1856. Make 
required changes for Non-HA to use new HA code in OM.
URL: https://github.com/apache/hadoop/pull/1174#discussion_r309011544
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/volume/TestOMVolumeRequest.java
 ##
 @@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.hadoop.ozone.om.request.volume;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.audit.AuditLogger;
+import org.apache.hadoop.ozone.audit.AuditMessage;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OmMetadataManagerImpl;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.rules.TemporaryFolder;
+import org.mockito.Mockito;
+
+import static org.mockito.ArgumentMatchers.any;
+import static org.mockito.Mockito.when;
+
+/**
+ * Base test class for Volume request.
+ */
+@SuppressWarnings("visibilitymodifier")
+public class TestOMVolumeRequest {
+
+  @Rule
+  public TemporaryFolder folder = new TemporaryFolder();
+
+  protected OzoneManager ozoneManager;
+  protected OMMetrics omMetrics;
+  protected OMMetadataManager omMetadataManager;
+  protected AuditLogger auditLogger;
+  // Just setting ozoneManagerDoubleBuffer which does no
 
 Review comment:
   Fixed it.





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1174: HDDS-1856. Make required changes for Non-HA to use new HA code in OM.

2019-07-30 Thread GitBox
bharatviswa504 commented on a change in pull request #1174: HDDS-1856. Make 
required changes for Non-HA to use new HA code in OM.
URL: https://github.com/apache/hadoop/pull/1174#discussion_r309011529
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/key/TestOMKeyRequest.java
 ##
 @@ -82,6 +83,12 @@
   protected long scmBlockSize = 1000L;
   protected long dataSize;
 
+  // Just setting ozoneManagerDoubleBuffer which does no
 
 Review comment:
   fixed it.





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1174: HDDS-1856. Make required changes for Non-HA to use new HA code in OM.

2019-07-30 Thread GitBox
bharatviswa504 commented on a change in pull request #1174: HDDS-1856. Make 
required changes for Non-HA to use new HA code in OM.
URL: https://github.com/apache/hadoop/pull/1174#discussion_r309008096
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerDoubleBuffer.java
 ##
 @@ -61,6 +63,10 @@
  private Queue<DoubleBufferEntry<OMClientResponse>> currentBuffer;
  private Queue<DoubleBufferEntry<OMClientResponse>> readyBuffer;
 
+
+  private Queue<CompletableFuture<Void>> currentFutureQueue;
 
 Review comment:
   Opened jira for this.
   https://issues.apache.org/jira/browse/HDDS-1874





[GitHub] [hadoop] arp7 commented on a change in pull request #1174: HDDS-1856. Make required changes for Non-HA to use new HA code in OM.

2019-07-30 Thread GitBox
arp7 commented on a change in pull request #1174: HDDS-1856. Make required 
changes for Non-HA to use new HA code in OM.
URL: https://github.com/apache/hadoop/pull/1174#discussion_r308998671
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerDoubleBuffer.java
 ##
 @@ -248,10 +291,20 @@ public long getFlushIterations() {
* @param response
* @param transactionIndex
*/
-  public synchronized void add(OMClientResponse response,
+  public synchronized CompletableFuture<Void> add(OMClientResponse response,
 
 Review comment:
   No that's fine. Leave it as it is.





[GitHub] [hadoop] hadoop-yetus commented on issue #1147: HDDS-1619. Support volume addACL operations for OM HA. Contributed by…

2019-07-30 Thread GitBox
hadoop-yetus commented on issue #1147: HDDS-1619. Support volume addACL 
operations for OM HA. Contributed by…
URL: https://github.com/apache/hadoop/pull/1147#issuecomment-516646998
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 47 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 609 | trunk passed |
   | +1 | compile | 351 | trunk passed |
   | +1 | checkstyle | 63 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 799 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 149 | trunk passed |
   | 0 | spotbugs | 418 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 614 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 535 | the patch passed |
   | +1 | compile | 353 | the patch passed |
   | +1 | javac | 353 | the patch passed |
   | +1 | checkstyle | 69 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 646 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 161 | the patch passed |
   | +1 | findbugs | 677 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 286 | hadoop-hdds in the patch failed. |
   | -1 | unit | 2603 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 42 | The patch does not generate ASF License warnings. |
   | | | 8165 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestReadRetries |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.TestMiniChaosOzoneCluster |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.hdds.scm.pipeline.TestSCMPipelineManager |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1147 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux ae40e9d71f97 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 7849bdc |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/4/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/4/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/4/testReport/ |
   | Max. process+thread count | 3852 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1174: HDDS-1856. Make required changes for Non-HA to use new HA code in OM.

2019-07-30 Thread GitBox
bharatviswa504 commented on a change in pull request #1174: HDDS-1856. Make 
required changes for Non-HA to use new HA code in OM.
URL: https://github.com/apache/hadoop/pull/1174#discussion_r308989717
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerDoubleBuffer.java
 ##
 @@ -248,10 +291,20 @@ public long getFlushIterations() {
* @param response
* @param transactionIndex
*/
-  public synchronized void add(OMClientResponse response,
+  public synchronized CompletableFuture<Void> add(OMClientResponse response,
 
 Review comment:
   This is temporary: soon all OMs will use the HA code path, so this code will 
be removed at a later point. And in the HA case we don't even use the future, so 
I think this should be okay; let me know if you still want to use Optional here.
   
   Also, if you look at the code where the future is used, we don't need a 
!= null check: https://github.com/apache/hadoop/pull/1166/files in 
OzoneManagerProtocolServerSideTranslatorPB.java
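
For context, a rough sketch of the add() pattern under discussion (simplified and hypothetical, not the exact HDDS-1856 change; the field names follow the quoted diff):

{code:java}
public synchronized CompletableFuture<Void> add(OMClientResponse response,
    long transactionIndex) {
  currentBuffer.add(new DoubleBufferEntry<>(transactionIndex, response));
  notify(); // wake up the flush thread

  if (!isRatisEnabled) {
    // Non-HA path: the caller waits on this future, which the flush thread
    // completes once the batch has been written out.
    CompletableFuture<Void> future = new CompletableFuture<>();
    currentFutureQueue.add(future);
    return future;
  }
  return null; // HA path: Ratis drives completion, so no future is needed.
}
{code}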








[jira] [Commented] (HADOOP-16268) Allow custom wrapped exception to be thrown by server if RPC call queue is filled up

2019-07-30 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896628#comment-16896628
 ] 

CR Hota commented on HADOOP-16268:
--

[~jojochuang] Thanks for assigning this. Yes, I will. I have been stuck on 
some internal work and on finishing HDFS-14090 first.

> Allow custom wrapped exception to be thrown by server if RPC call queue is 
> filled up
> 
>
> Key: HADOOP-16268
> URL: https://issues.apache.org/jira/browse/HADOOP-16268
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HADOOP-16268.001.patch
>
>
> In the current implementation of the call queue manager, 
> "CallQueueOverflowException" exceptions always wrap "RetriableException". 
> Through configs, servers should be allowed to throw custom exceptions for 
> new use cases.
> In CallQueueManager.java, backoff is done as below: 
> {code:java}
>   // ideally this behavior should be controllable too.
>   private void throwBackoff() throws IllegalStateException {
> throw CallQueueOverflowException.DISCONNECT;
>   }
> {code}
> Since CallQueueOverflowException only wraps RetriableException, clients 
> would end up hitting the same server for retries. In deployments that use 
> Routers, these overflowed requests could be handled by another router that 
> shares the same state, thus distributing load better across a cluster of 
> routers. In the absence of any custom exception, the current behavior should 
> be preserved.
> In the CallQueueOverflowException class, a new StandbyException wrapper 
> should be created, something like the below:
> {code:java}
>static final CallQueueOverflowException KEEPALIVE =
> new CallQueueOverflowException(
> new RetriableException(TOO_BUSY),
> RpcStatusProto.ERROR);
> static final CallQueueOverflowException DISCONNECT =
> new CallQueueOverflowException(
> new RetriableException(TOO_BUSY + " - disconnecting"),
> RpcStatusProto.FATAL);
> static final CallQueueOverflowException DISCONNECT2 =
> new CallQueueOverflowException(
> new StandbyException(TOO_BUSY + " - disconnecting"),
> RpcStatusProto.FATAL);
> {code}
>  
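
For illustration, a hedged sketch of what a config-driven throwBackoff could 
look like; the "clientBackoffToStandby" flag and its wiring are hypothetical, 
while DISCONNECT and DISCONNECT2 come from the proposal above:

{code:java}
// Choose the wrapped exception from a config flag so overflowed calls can
// fail over to another router instead of retrying against the same server.
private void throwBackoff() throws IllegalStateException {
  throw clientBackoffToStandby
      ? CallQueueOverflowException.DISCONNECT2  // wraps StandbyException
      : CallQueueOverflowException.DISCONNECT;  // wraps RetriableException
}
{code}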



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16268) Allow custom wrapped exception to be thrown by server if RPC call queue is filled up

2019-07-30 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896620#comment-16896620
 ] 

Wei-Chiu Chuang commented on HADOOP-16268:
--

[~crh] assigned the jira to you. Would you address [~xkrogen]'s comments? Thank 
you!

> Allow custom wrapped exception to be thrown by server if RPC call queue is 
> filled up
> 
>
> Key: HADOOP-16268
> URL: https://issues.apache.org/jira/browse/HADOOP-16268
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HADOOP-16268.001.patch
>
>
> In the current implementation of the call queue manager, 
> "CallQueueOverflowException" exceptions always wrap "RetriableException". 
> Through configs, servers should be allowed to throw custom exceptions for 
> new use cases.
> In CallQueueManager.java, backoff is done as below: 
> {code:java}
>   // ideally this behavior should be controllable too.
>   private void throwBackoff() throws IllegalStateException {
> throw CallQueueOverflowException.DISCONNECT;
>   }
> {code}
> Since CallQueueOverflowException only wraps RetriableException, clients 
> would end up hitting the same server for retries. In deployments that use 
> Routers, these overflowed requests could be handled by another router that 
> shares the same state, thus distributing load better across a cluster of 
> routers. In the absence of any custom exception, the current behavior should 
> be preserved.
> In the CallQueueOverflowException class, a new StandbyException wrapper 
> should be created, something like the below:
> {code:java}
>static final CallQueueOverflowException KEEPALIVE =
> new CallQueueOverflowException(
> new RetriableException(TOO_BUSY),
> RpcStatusProto.ERROR);
> static final CallQueueOverflowException DISCONNECT =
> new CallQueueOverflowException(
> new RetriableException(TOO_BUSY + " - disconnecting"),
> RpcStatusProto.FATAL);
> static final CallQueueOverflowException DISCONNECT2 =
> new CallQueueOverflowException(
> new StandbyException(TOO_BUSY + " - disconnecting"),
> RpcStatusProto.FATAL);
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-16268) Allow custom wrapped exception to be thrown by server if RPC call queue is filled up

2019-07-30 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HADOOP-16268:


Assignee: CR Hota

> Allow custom wrapped exception to be thrown by server if RPC call queue is 
> filled up
> 
>
> Key: HADOOP-16268
> URL: https://issues.apache.org/jira/browse/HADOOP-16268
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HADOOP-16268.001.patch
>
>
> In the current implementation of the call queue manager, 
> "CallQueueOverflowException" exceptions always wrap "RetriableException". 
> Through configs, servers should be allowed to throw custom exceptions for 
> new use cases.
> In CallQueueManager.java, backoff is done as below: 
> {code:java}
>   // ideally this behavior should be controllable too.
>   private void throwBackoff() throws IllegalStateException {
> throw CallQueueOverflowException.DISCONNECT;
>   }
> {code}
> Since CallQueueOverflowException only wraps RetriableException, clients 
> would end up hitting the same server for retries. In deployments that use 
> Routers, these overflowed requests could be handled by another router that 
> shares the same state, thus distributing load better across a cluster of 
> routers. In the absence of any custom exception, the current behavior should 
> be preserved.
> In the CallQueueOverflowException class, a new StandbyException wrapper 
> should be created, something like the below:
> {code:java}
>static final CallQueueOverflowException KEEPALIVE =
> new CallQueueOverflowException(
> new RetriableException(TOO_BUSY),
> RpcStatusProto.ERROR);
> static final CallQueueOverflowException DISCONNECT =
> new CallQueueOverflowException(
> new RetriableException(TOO_BUSY + " - disconnecting"),
> RpcStatusProto.FATAL);
> static final CallQueueOverflowException DISCONNECT2 =
> new CallQueueOverflowException(
> new StandbyException(TOO_BUSY + " - disconnecting"),
> RpcStatusProto.FATAL);
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #1167: HDDS-1863. Freon RandomKeyGenerator even if keySize is set to 0, it returns some random data to key.

2019-07-30 Thread GitBox
xiaoyuyao commented on a change in pull request #1167: HDDS-1863. Freon 
RandomKeyGenerator even if keySize is set to 0, it returns some random data to 
key.
URL: https://github.com/apache/hadoop/pull/1167#discussion_r308983465
 
 

 ##
 File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/RandomKeyGenerator.java
 ##
 @@ -263,9 +262,7 @@ public Void call() throws Exception {
 // Compute the common initial digest for all keys without their UUID
 if (validateWrites) {
   commonInitialMD = DigestUtils.getDigest(DIGEST_ALGORITHM);
-  int uuidLength = UUID.randomUUID().toString().length();
-  keySize = Math.max(uuidLength, keySize);
-  for (long nrRemaining = keySize - uuidLength; nrRemaining > 0;
+  for (long nrRemaining = keySize; nrRemaining > 0;
 
 Review comment:
   Should we add some parameter checks before processing nrRemaining? E.g., 
keySize > 0 and keySize > bufferSize; otherwise, if someone specifies 
keySize < 0 or keySize < bufferSize, nrRemaining and curSize can still end up 
negative.
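
   For illustration, a hedged sketch of such a guard (a hypothetical helper; 
in the real patch this would run in RandomKeyGenerator before the digest 
loop):

{code:java}
import com.google.common.base.Preconditions;

// Reject sizes that would drive nrRemaining or curSize negative.
static void validateSizes(long keySize, int bufferSize) {
  Preconditions.checkArgument(keySize >= 0,
      "keySize must be non-negative, but was %s", keySize);
  Preconditions.checkArgument(bufferSize > 0,
      "bufferSize must be positive, but was %s", bufferSize);
}
{code}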


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] arp7 commented on issue #1174: HDDS-1856. Make required changes for Non-HA to use new HA code in OM.

2019-07-30 Thread GitBox
arp7 commented on issue #1174: HDDS-1856. Make required changes for Non-HA to 
use new HA code in OM.
URL: https://github.com/apache/hadoop/pull/1174#issuecomment-516615672
 
 
   Basically +1 from me. A few minor comments.
   
   I thought about the potential synchronization issue @anuengineer pointed 
out offline, and I am not convinced it exists, because we hold/release the 
lock when flushTransactions calls setReadyBuffer; that is the synchronization 
point. All later access from the same thread should see the correct value of 
the futureQueue.
   
   A couple of ways to remove the ambiguity:
   1. Make the queues volatile.
   2. Return the queue from setReadyBuffer, as sketched below. Since the 
returned pointer was sampled with the lock held, the caller is guaranteed to 
see the correct value. This solution should also make findbugs happy.
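
   For illustration, a hedged sketch of option 2 (field and method names are 
assumptions based on this PR, not the final code):

{code:java}
// Swap the buffers under the lock and return the future queue that now
// belongs to the ready buffer. Because the reference is sampled while the
// lock is held, the flushing thread is guaranteed to see the right queue.
private synchronized Queue<CompletableFuture<Void>> setReadyBuffer() {
  Queue<DoubleBufferEntry<OMClientResponse>> tmpBuffer = currentBuffer;
  currentBuffer = readyBuffer;
  readyBuffer = tmpBuffer;

  Queue<CompletableFuture<Void>> tmpFutures = currentFutureQueue;
  currentFutureQueue = readyFutureQueue;
  readyFutureQueue = tmpFutures;

  return readyFutureQueue;
}
{code}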


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16459) Backport [HADOOP-16266] "Add more fine-grained processing time metrics to the RPC layer" to branch-2

2019-07-30 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896554#comment-16896554
 ] 

Erik Krogen commented on HADOOP-16459:
--

I noticed that with HDFS-12943 in branch-3.0 (as part of HDFS-14573), this can 
be a direct cherry-pick. So I think it will make everyone's life easier if I 
wait until HDFS-14204 is completed before backporting HDFS-12943 to branch-2.

I committed to branch-3.2, branch-3.1 and branch-3.0 for now.

> Backport [HADOOP-16266] "Add more fine-grained processing time metrics to the 
> RPC layer" to branch-2
> 
>
> Key: HADOOP-16459
> URL: https://issues.apache.org/jira/browse/HADOOP-16459
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HADOOP-16266-branch-2.000.patch, 
> HADOOP-16266-branch-2.001.patch, HADOOP-16266-branch-3.0.000.patch, 
> HADOOP-16266-branch-3.1.000.patch, HADOOP-16266-branch-3.2.000.patch
>
>
> We would like to target pulling HADOOP-16266, an important operability 
> enhancement and prerequisite for HDFS-14403, into branch-2.
> It's only present in trunk now so we also need to backport through the 3.x 
> lines.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16266) Add more fine-grained processing time metrics to the RPC layer

2019-07-30 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HADOOP-16266:
-
Fix Version/s: 3.1.3
   3.2.1
   3.0.4

> Add more fine-grained processing time metrics to the RPC layer
> --
>
> Key: HADOOP-16266
> URL: https://issues.apache.org/jira/browse/HADOOP-16266
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Christopher Gregorian
>Assignee: Erik Krogen
>Priority: Minor
>  Labels: rpc
> Fix For: 3.0.4, 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HADOOP-16266.001.patch, HADOOP-16266.002.patch, 
> HADOOP-16266.003.patch, HADOOP-16266.004.patch, HADOOP-16266.005.patch, 
> HADOOP-16266.006.patch, HADOOP-16266.007.patch, HADOOP-16266.008.patch, 
> HADOOP-16266.009.patch, HADOOP-16266.010.patch, 
> HADOOP-16266.011-followon.patch, HADOOP-16266.011.patch
>
>
> Splitting off of HDFS-14403 to track the first part: introduces more 
> fine-grained measuring of how a call's processing time is split up.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12282) Connection thread's name should be updated after address changing is detected

2019-07-30 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-12282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896546#comment-16896546
 ] 

Hadoop QA commented on HADOOP-12282:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 50s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 
30s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  2m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 96m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HADOOP-12282 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12976237/HADOOP-12282.002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 16f3c95cefd2 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 42683ae |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16429/testReport/ |
| Max. process+thread count | 1387 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16429/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Connection thread's name should be 

[GitHub] [hadoop] aajisaka commented on a change in pull request #1170: HADOOP-16398. Exports Hadoop metrics to Prometheus

2019-07-30 Thread GitBox
aajisaka commented on a change in pull request #1170: HADOOP-16398. Exports 
Hadoop metrics to Prometheus
URL: https://github.com/apache/hadoop/pull/1170#discussion_r308955056
 
 

 ##
 File path: 
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/BaseHttpServer.java
 ##
 @@ -70,6 +70,12 @@ public BaseHttpServer(Configuration conf, String name) 
throws IOException {
   this.httpAddress = getHttpBindAddress();
   this.httpsAddress = getHttpsBindAddress();
   HttpServer2.Builder builder = null;
+
+  // Avoid registering o.a.h.http.PrometheusServlet in HttpServer2.
+  // TODO: Replace "hadoop.prometheus.endpoint.enabled" with
+  // CommonConfigurationKeysPublic.HADOOP_PROMETHEUS_ENABLED when possible.
+  conf.setBoolean("hadoop.prometheus.endpoint.enabled", false);
+
 
 Review comment:
   Thanks. I think your understanding is correct.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-16112) Delete the baseTrashPath's subDir leads to don't modify baseTrashPath

2019-07-30 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HADOOP-16112:


Assignee: Lisheng Sun

> Delete the baseTrashPath's subDir leads to don't modify baseTrashPath
> -
>
> Key: HADOOP-16112
> URL: https://issues.apache.org/jira/browse/HADOOP-16112
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.2.0
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HADOOP-16112.001.patch, HADOOP-16112.002.patch
>
>
> There is a race condition in TrashPolicyDefault#moveToTrash:
> {code:java}
> try {
>   if (!fs.mkdirs(baseTrashPath, PERMISSION)) { // create current
> LOG.warn("Can't create(mkdir) trash directory: " + baseTrashPath);
> return false;
>   }
> } catch (FileAlreadyExistsException e) {
>   // find the path which is not a directory, and modify baseTrashPath
>   // & trashPath, then mkdirs
>   Path existsFilePath = baseTrashPath;
>   while (!fs.exists(existsFilePath)) {
> existsFilePath = existsFilePath.getParent();
>   }
>   // RACE: another thread can delete existsFilePath here, and then the
>   // result doesn't meet expectations. For example, suppose
>   // /user/u_sunlisheng/.Trash/Current/user/u_sunlisheng/b exists and we
>   // delete /user/u_sunlisheng/b/a. If existsFilePath is deleted, the
>   // result becomes
>   // /user/u_sunlisheng/.Trash/Current/user/u_sunlisheng+timestamp/b/a.
>   // So when existsFilePath is deleted, don't modify baseTrashPath.
>   baseTrashPath = new Path(baseTrashPath.toString().replace(
>   existsFilePath.toString(), existsFilePath.toString() + Time.now()));
>   trashPath = new Path(baseTrashPath, trashPath.getName());
>   // retry, ignore current failure
>   --i;
>   continue;
> } catch (IOException e) {
>   LOG.warn("Can't create trash directory: " + baseTrashPath, e);
>   cause = e;
>   break;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] arp7 commented on a change in pull request #1174: HDDS-1856. Make required changes for Non-HA to use new HA code in OM.

2019-07-30 Thread GitBox
arp7 commented on a change in pull request #1174: HDDS-1856. Make required 
changes for Non-HA to use new HA code in OM.
URL: https://github.com/apache/hadoop/pull/1174#discussion_r308950572
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerDoubleBuffer.java
 ##
 @@ -248,10 +291,20 @@ public long getFlushIterations() {
* @param response
* @param transactionIndex
*/
-  public synchronized void add(OMClientResponse response,
+  public synchronized CompletableFuture<Void> add(OMClientResponse response,
 
 Review comment:
   Can you return Optional<CompletableFuture<Void>> instead? We should avoid 
having nulls in the code wherever possible.
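
   For illustration, a hedged sketch of the Optional-returning variant 
("isRatisEnabled" and the queue fields are assumptions based on this PR, not 
the final code):

{code:java}
import java.util.Optional;
import java.util.concurrent.CompletableFuture;

public synchronized Optional<CompletableFuture<Void>> add(
    OMClientResponse response, long transactionIndex) {
  currentBuffer.add(new DoubleBufferEntry<>(transactionIndex, response));
  notify();
  if (isRatisEnabled) {
    return Optional.empty();  // the HA path never consumes the future
  }
  CompletableFuture<Void> future = new CompletableFuture<>();
  currentFutureQueue.add(future);
  return Optional.of(future);
}
{code}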


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16475) high number of calls to UserGroupInformation loginUserFromSubject creates huge number of ticket renewal thread which leads to Out of Memory

2019-07-30 Thread Uday Kiran Reddy (JIRA)
Uday Kiran Reddy created HADOOP-16475:
-

 Summary: high number of calls to UserGroupInformation 
loginUserFromSubject creates huge number of ticket renewal thread which leads 
to Out of Memory
 Key: HADOOP-16475
 URL: https://issues.apache.org/jira/browse/HADOOP-16475
 Project: Hadoop Common
  Issue Type: Bug
  Components: common
Affects Versions: 2.8.5
 Environment: This issue has been observed in all Cloudera and Hortonworks 
hadoop-common environments.
Reporter: Uday Kiran Reddy
 Fix For: 2.6.6, 2.8.4, 2.8.3
 Attachments: chart4.png

We have an application which calls UserGroupInformation#loginUserFromSubject 
multiple times, depending on user requests. Each call creates a new Kerberos 
auto-renewal (daemon) thread, which holds ticket and configuration objects. 
This leads to constant memory growth of the Java process, which eventually 
gets killed (out of memory).

{code:java}
static void loginUserFromSubject(Subject subject) {
  ...
  // this spawns a new auto-renewal thread on every call
  loginUser.spawnAutoRenewalThreadForUserCreds();
  ...
}
{code}

I think this is a bug which needs to be fixed on the hadoop-common side. We 
should not create an auto-renewal thread for each request. Either hand the 
auto-renewal process to the caller, or make sure that there are no duplicate 
auto-renewal threads for the same user/subject. We are currently in a 
situation where we have a huge number of threads but cannot delete them or 
stop the memory growth without restarting the services.
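
For illustration, a hedged sketch of one possible fix (hypothetical names, not 
the actual UserGroupInformation code): spawn at most one renewal thread per 
Kerberos principal, so repeated logins do not leak threads.

{code:java}
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Tracks principals that already have a renewal thread running.
private static final Set<String> RENEWAL_STARTED =
    ConcurrentHashMap.newKeySet();

private void maybeSpawnRenewalThread(String principal) {
  if (RENEWAL_STARTED.add(principal)) {    // first caller wins
    spawnAutoRenewalThreadForUserCreds();  // existing UGI method
  }
}
{code}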



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] arp7 commented on a change in pull request #1174: HDDS-1856. Make required changes for Non-HA to use new HA code in OM.

2019-07-30 Thread GitBox
arp7 commented on a change in pull request #1174: HDDS-1856. Make required 
changes for Non-HA to use new HA code in OM.
URL: https://github.com/apache/hadoop/pull/1174#discussion_r308948890
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerDoubleBuffer.java
 ##
 @@ -61,6 +63,10 @@
  private Queue<DoubleBufferEntry<OMClientResponse>> currentBuffer;
  private Queue<DoubleBufferEntry<OMClientResponse>> readyBuffer;
 
+
+  private Queue<CompletableFuture<Void>> currentFutureQueue;
 
 Review comment:
   Can you add a one-line comment for the new fields? 
   
   Also, we should probably add comments for the existing fields. I think an 
ASCII-art description of how the double buffer works will be helpful to future 
maintainers. However, it's okay to file a follow-up jira and do that 
separately later; no need to do it for this commit.
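
   For illustration, a hedged sketch of the kind of comments being requested 
(the wording, and the readyFutureQueue counterpart, are assumptions rather 
than text from the patch):

{code:java}
// Futures for responses sitting in currentBuffer; each is completed once
// its flush iteration finishes. Used only on the non-HA code path.
private Queue<CompletableFuture<Void>> currentFutureQueue;

// Counterpart for readyBuffer: futures for the batch currently being
// flushed to the OM DB.
private Queue<CompletableFuture<Void>> readyFutureQueue;
{code}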


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] arp7 commented on a change in pull request #1174: HDDS-1856. Make required changes for Non-HA to use new HA code in OM.

2019-07-30 Thread GitBox
arp7 commented on a change in pull request #1174: HDDS-1856. Make required 
changes for Non-HA to use new HA code in OM.
URL: https://github.com/apache/hadoop/pull/1174#discussion_r308946911
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/volume/TestOMVolumeRequest.java
 ##
 @@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.hadoop.ozone.om.request.volume;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.audit.AuditLogger;
+import org.apache.hadoop.ozone.audit.AuditMessage;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OmMetadataManagerImpl;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.rules.TemporaryFolder;
+import org.mockito.Mockito;
+
+import static org.mockito.ArgumentMatchers.any;
+import static org.mockito.Mockito.when;
+
+/**
+ * Base test class for Volume request.
+ */
+@SuppressWarnings("visibilitymodifier")
+public class TestOMVolumeRequest {
+
+  @Rule
+  public TemporaryFolder folder = new TemporaryFolder();
+
+  protected OzoneManager ozoneManager;
+  protected OMMetrics omMetrics;
+  protected OMMetadataManager omMetadataManager;
+  protected AuditLogger auditLogger;
+  // Just setting ozoneManagerDoubleBuffer which does no
 
 Review comment:
   same


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] arp7 commented on a change in pull request #1174: HDDS-1856. Make required changes for Non-HA to use new HA code in OM.

2019-07-30 Thread GitBox
arp7 commented on a change in pull request #1174: HDDS-1856. Make required 
changes for Non-HA to use new HA code in OM.
URL: https://github.com/apache/hadoop/pull/1174#discussion_r308946467
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/key/TestOMKeyRequest.java
 ##
 @@ -82,6 +83,12 @@
   protected long scmBlockSize = 1000L;
   protected long dataSize;
 
+  // Just setting ozoneManagerDoubleBuffer which does no
 
 Review comment:
   incomplete comment?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] arp7 commented on a change in pull request #1174: HDDS-1856. Make required changes for Non-HA to use new HA code in OM.

2019-07-30 Thread GitBox
arp7 commented on a change in pull request #1174: HDDS-1856. Make required 
changes for Non-HA to use new HA code in OM.
URL: https://github.com/apache/hadoop/pull/1174#discussion_r308946688
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/s3/multipart/TestS3MultipartRequest.java
 ##
 @@ -56,6 +57,12 @@
   protected OMMetadataManager omMetadataManager;
   protected AuditLogger auditLogger;
 
+  // Just setting ozoneManagerDoubleBuffer which does no
 
 Review comment:
   same..


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] arp7 commented on a change in pull request #1174: HDDS-1856. Make required changes for Non-HA to use new HA code in OM.

2019-07-30 Thread GitBox
arp7 commented on a change in pull request #1174: HDDS-1856. Make required 
changes for Non-HA to use new HA code in OM.
URL: https://github.com/apache/hadoop/pull/1174#discussion_r308946254
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/s3/bucket/TestS3BucketRequest.java
 ##
 @@ -0,0 +1,82 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.hadoop.ozone.om.request.s3.bucket;
+
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.rules.TemporaryFolder;
+import org.mockito.Mockito;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.audit.AuditLogger;
+import org.apache.hadoop.ozone.audit.AuditMessage;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OmMetadataManagerImpl;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
+
+import static org.mockito.ArgumentMatchers.any;
+import static org.mockito.Mockito.when;
+
+/**
+ * Base test class for S3 Bucket request.
+ */
+@SuppressWarnings("visibilityModifier")
+public class TestS3BucketRequest {
+  @Rule
+  public TemporaryFolder folder = new TemporaryFolder();
+
+  protected OzoneManager ozoneManager;
+  protected OMMetrics omMetrics;
+  protected OMMetadataManager omMetadataManager;
+  protected AuditLogger auditLogger;
+
+  // Just setting ozoneManagerDoubleBuffer which does no
 
 Review comment:
   incomplete comment?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao merged pull request #1171: HDDS-1834. parent directories not found in secure setup due to ACL check

2019-07-30 Thread GitBox
xiaoyuyao merged pull request #1171: HDDS-1834. parent directories not found in 
secure setup due to ACL check
URL: https://github.com/apache/hadoop/pull/1171
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao commented on issue #1171: HDDS-1834. parent directories not found in secure setup due to ACL check

2019-07-30 Thread GitBox
xiaoyuyao commented on issue #1171: HDDS-1834. parent directories not found in 
secure setup due to ACL check
URL: https://github.com/apache/hadoop/pull/1171#issuecomment-516586868
 
 
   @lokeshj1703 getAcl, setAcl, addAcl and removeAcl inside KeyManagerImpl are 
called after OzoneManager#checkAcls() when ACLs are enabled, so this fix 
should address the original issue in that case.
   When ACLs are not enabled, getAcl, setAcl, addAcl and removeAcl should not 
be allowed at all, so I think we need a separate JIRA to block those 
operations.
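
   For illustration, a hedged sketch of the guard such a follow-up JIRA might 
add (the exception type and result code are assumptions; "aclEnabled" mirrors 
the ozone.acl.enabled flag):

{code:java}
// Reject ACL operations outright when ACLs are disabled on the cluster.
private void checkAclOperationsAllowed() throws OMException {
  if (!aclEnabled) {
    throw new OMException("ACL operations are not allowed because ACLs "
        + "are disabled on this cluster",
        OMException.ResultCodes.PERMISSION_DENIED);
  }
}
{code}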


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1185: HADOOP-16470 IAMInstanceCredentialsProvider to use EC2ContainerCredentialsProviderWrapper

2019-07-30 Thread GitBox
hadoop-yetus commented on issue #1185: HADOOP-16470 
IAMInstanceCredentialsProvider to use EC2ContainerCredentialsProviderWrapper
URL: https://github.com/apache/hadoop/pull/1185#issuecomment-516585950
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 47 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 72 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1175 | trunk passed |
   | +1 | compile | 1141 | trunk passed |
   | +1 | checkstyle | 147 | trunk passed |
   | +1 | mvnsite | 121 | trunk passed |
   | +1 | shadedclient | 958 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 93 | trunk passed |
   | 0 | spotbugs | 67 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | -1 | findbugs | 65 | hadoop-tools/hadoop-aws in trunk has 1 extant 
findbugs warnings. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for patch |
   | +1 | mvninstall | 80 | the patch passed |
   | +1 | compile | 1053 | the patch passed |
   | +1 | javac | 1053 | the patch passed |
   | -0 | checkstyle | 142 | root: The patch generated 2 new + 0 unchanged - 0 
fixed = 2 total (was 0) |
   | +1 | mvnsite | 117 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 662 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 99 | the patch passed |
   | +1 | findbugs | 202 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 554 | hadoop-common in the patch passed. |
   | +1 | unit | 293 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 48 | The patch does not generate ASF License warnings. |
   | | | 7230 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1185/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1185 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle |
   | uname | Linux 78484c0bc297 4.4.0-157-generic #185-Ubuntu SMP Tue Jul 23 
09:17:01 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / c75f16d |
   | Default Java | 1.8.0_212 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1185/1/artifact/out/branch-findbugs-hadoop-tools_hadoop-aws-warnings.html
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1185/1/artifact/out/diff-checkstyle-root.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1185/1/testReport/ |
   | Max. process+thread count | 1423 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1185/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] arp7 commented on a change in pull request #1174: HDDS-1856. Make required changes for Non-HA to use new HA code in OM.

2019-07-30 Thread GitBox
arp7 commented on a change in pull request #1174: HDDS-1856. Make required 
changes for Non-HA to use new HA code in OM.
URL: https://github.com/apache/hadoop/pull/1174#discussion_r308917239
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketCreateRequest.java
 ##
 @@ -169,15 +163,27 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
   metadataManager.getBucketTable().addCacheEntry(new CacheKey<>(bucketKey),
   new CacheValue<>(Optional.of(omBucketInfo), transactionLogIndex));
 
-
+  omResponse.setCreateBucketResponse(
+  CreateBucketResponse.newBuilder().build());
+  omClientResponse = new OMBucketCreateResponse(omBucketInfo,
+  omResponse.build());
 } catch (IOException ex) {
   exception = ex;
+  omClientResponse = new OMBucketCreateResponse(omBucketInfo,
+  createErrorOMResponse(omResponse, exception));
 } finally {
+  if (omClientResponse != null) {
+omClientResponse.setFlushFuture(
 
 Review comment:
   Can the setFlushFuture operation be moved to the caller? I assume every 
request has to do this.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jojochuang commented on a change in pull request #1090: SUBMARINE-72 Kill and destroy the job through the submarine client

2019-07-30 Thread GitBox
jojochuang commented on a change in pull request #1090: SUBMARINE-72 Kill and 
destroy the job through the submarine client
URL: https://github.com/apache/hadoop/pull/1090#discussion_r308841330
 
 

 ##
 File path: 
hadoop-submarine/hadoop-submarine-core/src/main/java/org/apache/hadoop/yarn/submarine/client/cli/KillJobCli.java
 ##
 @@ -0,0 +1,109 @@
+/**
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License. See accompanying LICENSE file.
+ */
+
+package org.apache.hadoop.yarn.submarine.client.cli;
+
+import static org.apache.hadoop.yarn.client.api.AppAdminClient.DEFAULT_TYPE;
+
+import java.io.IOException;
+
+import org.apache.commons.cli.CommandLine;
+import org.apache.commons.cli.GnuParser;
+import org.apache.commons.cli.HelpFormatter;
+import org.apache.commons.cli.Options;
+import org.apache.commons.cli.ParseException;
+import org.apache.hadoop.yarn.client.api.AppAdminClient;
+import org.apache.hadoop.yarn.exceptions.YarnException;
+import org.apache.hadoop.yarn.submarine.client.cli.param.KillJobParameters;
+import org.apache.hadoop.yarn.submarine.client.cli.param.ParametersHolder;
+import org.apache.hadoop.yarn.submarine.common.ClientContext;
+import org.apache.hadoop.yarn.submarine.common.exception.SubmarineException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.google.common.annotations.VisibleForTesting;
+
+public class KillJobCli extends AbstractCli {
+  private static final Logger LOG = LoggerFactory.getLogger(ShowJobCli.class);
+
+  private Options options;
+  private ParametersHolder parametersHolder;
+
+  public KillJobCli(ClientContext cliContext) {
+super(cliContext);
+options = generateOptions();
+  }
+
+  public void printUsages() {
+new HelpFormatter().printHelp("job kill", options);
+  }
+
+  private Options generateOptions() {
+Options options = new Options();
+options.addOption(CliConstants.NAME, true, "Name of the job");
+options.addOption("h", "help", false, "Print help");
+return options;
+  }
+
+  private void parseCommandLineAndGetKillJobParameters(String[] args)
+  throws IOException, YarnException {
+// Do parsing
+GnuParser parser = new GnuParser();
+CommandLine cli;
+try {
+  cli = parser.parse(options, args);
+  parametersHolder =
+  ParametersHolder.createWithCmdLine(cli, Command.KILL_JOB);
+  parametersHolder.updateParameters(clientContext);
+} catch (ParseException e) {
+  printUsages();
 
 Review comment:
   Please print the exception before printing the usage. Otherwise it is 
sometimes hard to tell where the command is wrong.
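
   For illustration, a hedged sketch of the suggested change to the catch 
block (LOG is the class's existing slf4j logger):

{code:java}
} catch (ParseException e) {
  // Surface the parse failure before showing usage so the user can see
  // which argument was wrong.
  LOG.error("Failed to parse the 'job kill' command line", e);
  printUsages();
}
{code}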


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jojochuang commented on a change in pull request #1090: SUBMARINE-72 Kill and destroy the job through the submarine client

2019-07-30 Thread GitBox
jojochuang commented on a change in pull request #1090: SUBMARINE-72 Kill and 
destroy the job through the submarine client
URL: https://github.com/apache/hadoop/pull/1090#discussion_r308842113
 
 

 ##
 File path: 
hadoop-submarine/hadoop-submarine-core/src/main/java/org/apache/hadoop/yarn/submarine/client/cli/KillJobCli.java
 ##
 @@ -0,0 +1,109 @@
+/**
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License. See accompanying LICENSE file.
+ */
+
+package org.apache.hadoop.yarn.submarine.client.cli;
+
+import static org.apache.hadoop.yarn.client.api.AppAdminClient.DEFAULT_TYPE;
+
+import java.io.IOException;
+
+import org.apache.commons.cli.CommandLine;
+import org.apache.commons.cli.GnuParser;
+import org.apache.commons.cli.HelpFormatter;
+import org.apache.commons.cli.Options;
+import org.apache.commons.cli.ParseException;
+import org.apache.hadoop.yarn.client.api.AppAdminClient;
+import org.apache.hadoop.yarn.exceptions.YarnException;
+import org.apache.hadoop.yarn.submarine.client.cli.param.KillJobParameters;
+import org.apache.hadoop.yarn.submarine.client.cli.param.ParametersHolder;
+import org.apache.hadoop.yarn.submarine.common.ClientContext;
+import org.apache.hadoop.yarn.submarine.common.exception.SubmarineException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.google.common.annotations.VisibleForTesting;
+
+public class KillJobCli extends AbstractCli {
+  private static final Logger LOG = LoggerFactory.getLogger(ShowJobCli.class);
+
+  private Options options;
+  private ParametersHolder parametersHolder;
+
+  public KillJobCli(ClientContext cliContext) {
+super(cliContext);
+options = generateOptions();
+  }
+
+  public void printUsages() {
+new HelpFormatter().printHelp("job kill", options);
+  }
+
+  private Options generateOptions() {
+Options options = new Options();
+options.addOption(CliConstants.NAME, true, "Name of the job");
+options.addOption("h", "help", false, "Print help");
+return options;
+  }
+
+  private void parseCommandLineAndGetKillJobParameters(String[] args)
+  throws IOException, YarnException {
+// Do parsing
+GnuParser parser = new GnuParser();
+CommandLine cli;
+try {
+  cli = parser.parse(options, args);
+  parametersHolder =
+  ParametersHolder.createWithCmdLine(cli, Command.KILL_JOB);
+  parametersHolder.updateParameters(clientContext);
+} catch (ParseException e) {
+  printUsages();
+}
+  }
+
+  @VisibleForTesting
+  protected boolean KillJob() throws IOException, YarnException {
+String jobName = getParameters().getName();
+AppAdminClient appAdminClient = AppAdminClient
+.createAppAdminClient(DEFAULT_TYPE, clientContext.getYarnConfig());
+
+if (appAdminClient.actionStop(jobName) != 0
+|| appAdminClient.actionDestroy(jobName) != 0) {
+  LOG.error("Fail to kill job !");
 
 Review comment:
   It is generally preferred to log additional information, in order to help 
troubleshoot why the job doesn't get killed.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jojochuang commented on a change in pull request #1090: SUBMARINE-72 Kill and destroy the job through the submarine client

2019-07-30 Thread GitBox
jojochuang commented on a change in pull request #1090: SUBMARINE-72 Kill and 
destroy the job through the submarine client
URL: https://github.com/apache/hadoop/pull/1090#discussion_r308909022
 
 

 ##
 File path: 
hadoop-submarine/hadoop-submarine-core/src/main/java/org/apache/hadoop/yarn/submarine/client/cli/KillJobCli.java
 ##
 @@ -0,0 +1,109 @@
+/**
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License. See accompanying LICENSE file.
+ */
+
+package org.apache.hadoop.yarn.submarine.client.cli;
+
+import static org.apache.hadoop.yarn.client.api.AppAdminClient.DEFAULT_TYPE;
+
+import java.io.IOException;
+
+import org.apache.commons.cli.CommandLine;
+import org.apache.commons.cli.GnuParser;
+import org.apache.commons.cli.HelpFormatter;
+import org.apache.commons.cli.Options;
+import org.apache.commons.cli.ParseException;
+import org.apache.hadoop.yarn.client.api.AppAdminClient;
+import org.apache.hadoop.yarn.exceptions.YarnException;
+import org.apache.hadoop.yarn.submarine.client.cli.param.KillJobParameters;
+import org.apache.hadoop.yarn.submarine.client.cli.param.ParametersHolder;
+import org.apache.hadoop.yarn.submarine.common.ClientContext;
+import org.apache.hadoop.yarn.submarine.common.exception.SubmarineException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.google.common.annotations.VisibleForTesting;
+
+public class KillJobCli extends AbstractCli {
+  private static final Logger LOG = LoggerFactory.getLogger(KillJobCli.class);
+
+  private Options options;
+  private ParametersHolder parametersHolder;
+
+  public KillJobCli(ClientContext cliContext) {
+    super(cliContext);
+    options = generateOptions();
+  }
+
+  public void printUsages() {
+    new HelpFormatter().printHelp("job kill", options);
+  }
+
+  private Options generateOptions() {
+    Options options = new Options();
+    options.addOption(CliConstants.NAME, true, "Name of the job");
+    options.addOption("h", "help", false, "Print help");
+    return options;
+  }
+
+  private void parseCommandLineAndGetKillJobParameters(String[] args)
+      throws IOException, YarnException {
+    // Do parsing
+    GnuParser parser = new GnuParser();
+    CommandLine cli;
+    try {
+      cli = parser.parse(options, args);
+      parametersHolder =
+          ParametersHolder.createWithCmdLine(cli, Command.KILL_JOB);
+      parametersHolder.updateParameters(clientContext);
+    } catch (ParseException e) {
+      printUsages();
+    }
+  }
+
+  @VisibleForTesting
+  protected boolean killJob() throws IOException, YarnException {
+    String jobName = getParameters().getName();
+    AppAdminClient appAdminClient = AppAdminClient
+        .createAppAdminClient(DEFAULT_TYPE, clientContext.getYarnConfig());
+
+    if (appAdminClient.actionStop(jobName) != 0
+        || appAdminClient.actionDestroy(jobName) != 0) {
+      LOG.error("Failed to kill the job!");
 
 Review comment:
   Additional info meaning: say whether the kill failed because it was unable 
   to stop the job or because it was unable to destroy the job, as sketched 
   below.
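   A sketch of what that could look like (reusing the patch's own names; this 
   is the review suggestion, not the committed code): check the stop and 
   destroy return codes separately so the log says which step failed.
   
       if (appAdminClient.actionStop(jobName) != 0) {
         LOG.error("Failed to stop job {}!", jobName);
         return false;
       }
       if (appAdminClient.actionDestroy(jobName) != 0) {
         LOG.error("Failed to destroy job {}!", jobName);
         return false;
       }
       return true;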


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16460) ABFS: fix for Server Name Indication (SNI)

2019-07-30 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896445#comment-16896445
 ] 

Hudson commented on HADOOP-16460:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17009 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17009/])
HADOOP-16460: ABFS: fix for Server Name Indication (SNI) (tmarq: rev 
12a526c080ea37d74f1bc1e543943dc847e2d823)
* (edit) hadoop-project/pom.xml


> ABFS: fix for Server Name Indication (SNI)
> -
>
> Key: HADOOP-16460
> URL: https://issues.apache.org/jira/browse/HADOOP-16460
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.2.0, 3.1.2
>Reporter: Thomas Marquardt
>Assignee: Sneha Vijayarajan
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: DriverTestResult.log, HADOOP-16460.001.patch, 
> image-2019-07-30-10-11-37-970.png
>
>
> We need to update wildfly-openssl to 1.0.7.Final in ./hadoop-project/pom.xml.
>  
> ABFS depends on wildfly-openssl for secure sockets due to the performance 
> improvements. The current wildfly-openssl does not support Server Name 
> Indication (SNI). A fix was made in 
> https://github.com/wildfly/wildfly-openssl/issues/59 and there is an official 
> release of wildfly-openssl with the fix 
> (https://github.com/wildfly/wildfly-openssl/releases/tag/1.0.7.Final).
> The fix has been validated.
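> The change itself is a one-line version bump in the POM; a sketch of the 
> resulting dependency entry (coordinates assumed from Maven Central, not 
> quoted from the patch):
> {code:xml}
> <!-- hadoop-project/pom.xml (sketch): pick up the SNI-capable release -->
> <dependency>
>   <groupId>org.wildfly.openssl</groupId>
>   <artifactId>wildfly-openssl</artifactId>
>   <version>1.0.7.Final</version>
> </dependency>
> {code}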



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16452) Increase ipc.maximum.data.length default from 64MB to 128MB

2019-07-30 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896446#comment-16896446
 ] 

Hudson commented on HADOOP-16452:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17009 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17009/])
HADOOP-16452. Increase ipc.maximum.data.length default from 64MB to (weichiu: 
rev c75f16db79974ad03afbc366709fe2356d0a633e)
* (edit) hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java


> Increase ipc.maximum.data.length default from 64MB to 128MB
> ---
>
> Key: HADOOP-16452
> URL: https://issues.apache.org/jira/browse/HADOOP-16452
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 2.6.0
>Reporter: Wei-Chiu Chuang
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16452.001.patch, HADOOP-16452.002.patch
>
>
> Reason for bumping the default:
> Denser DataNodes are common. It is not uncommon to find a DataNode with > 7 
> million blocks these days.
> With such a high number of blocks, the block report message can exceed the 
> 64MB limit (defined by ipc.maximum.data.length). The block reports are 
> rejected, causing missing blocks in HDFS. We had to double this configuration 
> value in order to work around the issue.
> We are seeing an increasing number of these cases. I think it's time to 
> revisit some of these default values as the hardware evolves.
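> For anyone hitting this before upgrading, the workaround described above is 
> a one-property override in core-site.xml (value in bytes; the doubled 128MB 
> value is shown):
> {code:xml}
> <property>
>   <name>ipc.maximum.data.length</name>
>   <value>134217728</value>
> </property>
> {code}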



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1186: HADOOP-16472. findbugs warning on LocalMetadataStore.ttlTimeProvider sync

2019-07-30 Thread GitBox
hadoop-yetus commented on issue #1186: HADOOP-16472. findbugs warning on 
LocalMetadataStore.ttlTimeProvider sync
URL: https://github.com/apache/hadoop/pull/1186#issuecomment-516560775
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:---------|
   | 0 | reexec | 45 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1051 | trunk passed |
   | +1 | compile | 34 | trunk passed |
   | +1 | checkstyle | 20 | trunk passed |
   | +1 | mvnsite | 39 | trunk passed |
   | +1 | shadedclient | 670 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 25 | trunk passed |
   | 0 | spotbugs | 58 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | -1 | findbugs | 55 | hadoop-tools/hadoop-aws in trunk has 1 extant 
findbugs warnings. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 32 | the patch passed |
   | +1 | compile | 27 | the patch passed |
   | +1 | javac | 27 | the patch passed |
   | -0 | checkstyle | 16 | hadoop-tools/hadoop-aws: The patch generated 1 new 
+ 0 unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 32 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 696 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 21 | the patch passed |
   | +1 | findbugs | 63 | hadoop-tools/hadoop-aws generated 0 new + 0 unchanged 
- 1 fixed = 0 total (was 1) |
   ||| _ Other Tests _ |
   | +1 | unit | 280 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 25 | The patch does not generate ASF License warnings. |
   | | | 3192 | |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1186/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1186 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 2983e3b0c642 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / c75f16d |
   | Default Java | 1.8.0_212 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1186/1/artifact/out/branch-findbugs-hadoop-tools_hadoop-aws-warnings.html
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1186/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1186/1/testReport/ |
   | Max. process+thread count | 412 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1186/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16452) Increase ipc.maximum.data.length default from 64MB to 128MB

2019-07-30 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-16452:
-
   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Pushed 002 patch to trunk. Thanks [~anu] [~sodonnell] and [~xkrogen] for 
comments.

I will continue to revisit existing configuration default values and update as 
needed.

> Increase ipc.maximum.data.length default from 64MB to 128MB
> ---
>
> Key: HADOOP-16452
> URL: https://issues.apache.org/jira/browse/HADOOP-16452
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 2.6.0
>Reporter: Wei-Chiu Chuang
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16452.001.patch, HADOOP-16452.002.patch
>
>
> Reason for bumping the default:
> Denser DataNodes are common. It is not uncommon to find a DataNode with > 7 
> million blocks these days.
> With such a high number of blocks, the block report message can exceed the 
> 64MB limit (defined by ipc.maximum.data.length). The block reports are 
> rejected, causing missing blocks in HDFS. We had to double this configuration 
> value in order to work around the issue.
> We are seeing an increasing number of these cases. I think it's time to 
> revisit some of these default values as the hardware evolves.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16439) Upgrade bundled Tomcat in branch-2

2019-07-30 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896374#comment-16896374
 ] 

Wei-Chiu Chuang commented on HADOOP-16439:
--

Patch 001 LGTM. Ran all HDFS unit tests and the only failures are unrelated 
(HDFS-14681, HDFS-14682).

> Upgrade bundled Tomcat in branch-2
> --
>
> Key: HADOOP-16439
> URL: https://issues.apache.org/jira/browse/HADOOP-16439
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: httpfs, kms
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
> Attachments: HADOOP-16439-branch-2.000.patch, 
> HADOOP-16439-branch-2.001.patch
>
>
> proposed by [~jojochuang] on the mailing list:
> {quote}We migrated from Tomcat to Jetty in Hadoop3, because Tomcat 6 went EOL 
> in
>  2016. But we did not realize three years after Tomcat 6's EOL, a majority
>  of Hadoop users are still in Hadoop 2, and it looks like Hadoop 2 will stay
>  alive for another few years.
> Backporting Jetty to Hadoop2 is probably too big of an incompatibility.
>  How about migrating to Tomcat9?
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16405) Upgrade Wildfly Openssl version to 1.0.7.Final

2019-07-30 Thread Thomas Marquardt (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Marquardt resolved HADOOP-16405.
---
   Resolution: Fixed
Fix Version/s: 3.3.0

Duplicate of HADOOP-16460.

> Upgrade Wildfly Openssl version to 1.0.7.Final
> --
>
> Key: HADOOP-16405
> URL: https://issues.apache.org/jira/browse/HADOOP-16405
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/azure
>Affects Versions: 3.2.0
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
>Priority: Major
> Fix For: 3.3.0
>
>
> Upgrade Wildfly Openssl version to 1.0.7.Final. This version has SNI support 
> which is essential for firewall enabled clusters along with many stability 
> related fixes.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16460) ABFS: fix for Server Name Indication (SNI)

2019-07-30 Thread Thomas Marquardt (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Marquardt updated HADOOP-16460:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

+1

Committed to trunk:

commit 12a526c080ea37d74f1bc1e543943dc847e2d823
Author: Sneha Vijayarajan 
Date: Tue Jul 30 15:18:15 2019 +

HADOOP-16460: ABFS: fix for Server Name Indication (SNI)

Contributed by Sneha Vijayarajan 

hadoop-project/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

> ABFS: fix for Server Name Indication (SNI)
> -
>
> Key: HADOOP-16460
> URL: https://issues.apache.org/jira/browse/HADOOP-16460
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.2.0, 3.1.2
>Reporter: Thomas Marquardt
>Assignee: Sneha Vijayarajan
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: DriverTestResult.log, HADOOP-16460.001.patch, 
> image-2019-07-30-10-11-37-970.png
>
>
> We need to update wildfly-openssl to 1.0.7.Final in ./hadoop-project/pom.xml.
>  
> ABFS depends on wildfly-openssl for secure sockets due to the performance 
> improvements. The current wildfly-openssl does not support Server Name 
> Indication (SNI). A fix was made in 
> https://github.com/wildfly/wildfly-openssl/issues/59 and there is an official 
> release of wildfly-openssl with the fix 
> (https://github.com/wildfly/wildfly-openssl/releases/tag/1.0.7.Final).
> The fix has been validated.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16460) ABFS: fix for Server Name Indication (SNI)

2019-07-30 Thread Thomas Marquardt (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Marquardt updated HADOOP-16460:
--
Fix Version/s: 3.3.0

> ABFS: fix for Server Name Indication (SNI)
> -
>
> Key: HADOOP-16460
> URL: https://issues.apache.org/jira/browse/HADOOP-16460
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.2.0, 3.1.2
>Reporter: Thomas Marquardt
>Assignee: Sneha Vijayarajan
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: DriverTestResult.log, HADOOP-16460.001.patch, 
> image-2019-07-30-10-11-37-970.png
>
>
> We need to update wildfly-openssl to 1.0.7.Final in ./hadoop-project/pom.xml.
>  
> ABFS depends on wildfly-openssl for secure sockets due to the performance 
> improvements. The current wildfly-openssl does not support Server Name 
> Indication (SNI). A fix was made in 
> https://github.com/wildfly/wildfly-openssl/issues/59 and there is an official 
> release of wildfly-openssl with the fix 
> (https://github.com/wildfly/wildfly-openssl/releases/tag/1.0.7.Final).
> The fix has been validated.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] avijayanhwx commented on issue #1163: HDDS-1786 : Datanodes takeSnapshot should delete previously created s…

2019-07-30 Thread GitBox
avijayanhwx commented on issue #1163: HDDS-1786 : Datanodes takeSnapshot should 
delete previously created s…
URL: https://github.com/apache/hadoop/pull/1163#issuecomment-516514965
 
 
   I will work on adding a configurable policy in Ratis so that Ozone Manager 
can also configure it as needed. After that is done, I will update this PR to 
use that policy.  cc @mukul1987 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] avijayanhwx closed pull request #1004: HDDS-1718 : Increase Ratis Leader election timeout default to 10 seconds

2019-07-30 Thread GitBox
avijayanhwx closed pull request #1004: HDDS-1718 : Increase Ratis Leader 
election timeout default to 10 seconds
URL: https://github.com/apache/hadoop/pull/1004
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bgaborg commented on issue #1185: HADOOP-16470 IAMInstanceCredentialsProvider to use EC2ContainerCredentialsProviderWrapper

2019-07-30 Thread GitBox
bgaborg commented on issue #1185: HADOOP-16470 IAMInstanceCredentialsProvider 
to use EC2ContainerCredentialsProviderWrapper
URL: https://github.com/apache/hadoop/pull/1185#issuecomment-516500347
 
 
   I'll try this locally from my machine, but we can't simply add an 
integration test for this, right?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 edited a comment on issue #1182: HDDS-1872. Fix entry clean up from openKeyTable during complete MPU.

2019-07-30 Thread GitBox
bharatviswa504 edited a comment on issue #1182: HDDS-1872. Fix entry clean up 
from openKeyTable during complete MPU.
URL: https://github.com/apache/hadoop/pull/1182#issuecomment-516494090
 
 
   Thank You @anuengineer for the review.
   Test failures are not related to this patch.
   I will commit this to the trunk and ozone-0.4.1 branch.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #1160: HADOOP-16458 LocatedFileStatusFetcher.getFileStatuses failing intermittently with s3

2019-07-30 Thread GitBox
steveloughran commented on a change in pull request #1160: HADOOP-16458 
LocatedFileStatusFetcher.getFileStatuses failing intermittently with s3
URL: https://github.com/apache/hadoop/pull/1160#discussion_r308821497
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestLocatedFileStatusFetcher.java
 ##
 @@ -0,0 +1,40 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Test what the LocatedFileStatusFetcher can do.
+ * This is related to HADOOP-16458.
+ * There are basic tests in ITestS3AFSMainOperations; this
+ * is to see if we can create better corner cases.
+ */
+public class ITestLocatedFileStatusFetcher extends AbstractS3ATestBase {
 
 Review comment:
   Either I implement a test here or I cut the file


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12282) Connection thread's name should be updated after address changing is detected

2019-07-30 Thread Lisheng Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-12282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896290#comment-16896290
 ] 

Lisheng Sun commented on HADOOP-12282:
--

uploaded the patch for v2.

> Connection thread's name should be updated after address changing is detected
> -
>
> Key: HADOOP-12282
> URL: https://issues.apache.org/jira/browse/HADOOP-12282
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Reporter: zhouyingchao
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HADOOP-12282-001.patch, HADOOP-12282.002.patch
>
>
> In a hadoop hdfs cluster, I changed the standby Namenode's ip address (the 
> hostname is not changed and the routing tables are updated). After the 
> change, the cluster is running as normal.
>  However, I found that the debug message of datanode's IPC still prints the 
> original ip address. By looking into the implementation, it turns out that 
> the original address is used as the thread's name. I think the thread's name 
> should be changed if the address change is detected, because the server 
> address is one of the constituent elements of the thread's name.
> {code:java}
> Connection(ConnectionId remoteId, int serviceClass,
> Consumer<Connection> removeMethod) {
> ..
> UserGroupInformation ticket = remoteId.getTicket();
> // try SASL if security is enabled or if the ugi contains tokens.
> // this causes a SIMPLE client with tokens to attempt SASL
> boolean trySasl = UserGroupInformation.isSecurityEnabled() ||
>   (ticket != null && !ticket.getTokens().isEmpty());
> this.authProtocol = trySasl ? AuthProtocol.SASL : AuthProtocol.NONE;
> this.setName("IPC Client (" + socketFactory.hashCode() +") connection to " +
> server.toString() +
> " from " + ((ticket==null)?"an unknown user":ticket.getUserName()));
> this.setDaemon(true);
> }{code}
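> A sketch of the proposed behaviour (the method name and call site are 
> assumptions, not the attached patch): rebuild the thread name from the 
> current {{server}} value whenever an address change is detected.
> {code:java}
> // Re-derive the thread name after the remote address is re-resolved,
> // so debug logs show the connection's current server address.
> private void refreshThreadName(UserGroupInformation ticket) {
>   this.setName("IPC Client (" + socketFactory.hashCode()
>       + ") connection to " + server.toString()
>       + " from " + ((ticket == null) ? "an unknown user"
>           : ticket.getUserName()));
> }
> {code}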



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 merged pull request #1182: HDDS-1872. Fix entry clean up from openKeyTable during complete MPU.

2019-07-30 Thread GitBox
bharatviswa504 merged pull request #1182: HDDS-1872. Fix entry clean up from 
openKeyTable during complete MPU.
URL: https://github.com/apache/hadoop/pull/1182
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on issue #1182: HDDS-1872. Fix entry clean up from openKeyTable during complete MPU.

2019-07-30 Thread GitBox
bharatviswa504 commented on issue #1182: HDDS-1872. Fix entry clean up from 
openKeyTable during complete MPU.
URL: https://github.com/apache/hadoop/pull/1182#issuecomment-516494090
 
 
   Test failures are not related to this patch.
   I will commit this to the trunk.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12282) Connection thread's name should be updated after address changing is detected

2019-07-30 Thread Lisheng Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-12282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-12282:
-
Attachment: HADOOP-12282.002.patch

> Connection thread's name should be updated after address changing is detected
> -
>
> Key: HADOOP-12282
> URL: https://issues.apache.org/jira/browse/HADOOP-12282
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Reporter: zhouyingchao
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HADOOP-12282-001.patch, HADOOP-12282.002.patch
>
>
> In a hadoop hdfs cluster, I changed the standby Namenode's ip address (the 
> hostname is not changed and the routing tables are updated). After the 
> change, the cluster is running as normal.
>  However, I found that the debug message of datanode's IPC still prints the 
> original ip address. By looking into the implementation, it turns out that 
> the original address is used as the thread's name. I think the thread's name 
> should be changed if the address change is detected, because the server 
> address is one of the constituent elements of the thread's name.
> {code:java}
> Connection(ConnectionId remoteId, int serviceClass,
> Consumer<Connection> removeMethod) {
> ..
> UserGroupInformation ticket = remoteId.getTicket();
> // try SASL if security is enabled or if the ugi contains tokens.
> // this causes a SIMPLE client with tokens to attempt SASL
> boolean trySasl = UserGroupInformation.isSecurityEnabled() ||
>   (ticket != null && !ticket.getTokens().isEmpty());
> this.authProtocol = trySasl ? AuthProtocol.SASL : AuthProtocol.NONE;
> this.setName("IPC Client (" + socketFactory.hashCode() +") connection to " +
> server.toString() +
> " from " + ((ticket==null)?"an unknown user":ticket.getUserName()));
> this.setDaemon(true);
> }{code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #1186: HADOOP-16472. findbugs warning on LocalMetadataStore.ttlTimeProvider sync

2019-07-30 Thread GitBox
steveloughran commented on issue #1186: HADOOP-16472. findbugs warning on 
LocalMetadataStore.ttlTimeProvider sync
URL: https://github.com/apache/hadoop/pull/1186#issuecomment-516492601
 
 
   thx. somehow we missed this earlier


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12282) Connection thread's name should be updated after address changing is detected

2019-07-30 Thread Lisheng Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-12282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-12282:
-
Description: 
In a hadoop hdfs cluster, I changed the standby Namenode's ip address (the 
hostname is not changed and the routing tables are updated). After the change, 
the cluster is running as normal.
 However, I found that the debug message of datanode's IPC still prints the 
original ip address. By looking into the implementation, it turns out that the 
original address is used as the thread's name. I think the thread's name should 
be changed if the address change is detected, because the server address is one 
of the constituent elements of the thread's name.
{code:java}
Connection(ConnectionId remoteId, int serviceClass,
Consumer<Connection> removeMethod) {

..
UserGroupInformation ticket = remoteId.getTicket();
// try SASL if security is enabled or if the ugi contains tokens.
// this causes a SIMPLE client with tokens to attempt SASL
boolean trySasl = UserGroupInformation.isSecurityEnabled() ||
  (ticket != null && !ticket.getTokens().isEmpty());
this.authProtocol = trySasl ? AuthProtocol.SASL : AuthProtocol.NONE;

this.setName("IPC Client (" + socketFactory.hashCode() +") connection to " +
server.toString() +
" from " + ((ticket==null)?"an unknown user":ticket.getUserName()));
this.setDaemon(true);
}{code}

  was:
In a hadoop hdfs cluster, I changed the standby Namenode's ip address (the 
hostname is not changed and the routing tables are updated). After the change, 
the cluster is running as normal.
 However, I found that the debug message of datanode's IPC still prints the 
original ip address. By looking into the implementation, it turns out that the 
original address is used as the thread's name. I think the thread's name should 
be changed if the address change is detected.

 
{code:java}
Connection(ConnectionId remoteId, int serviceClass,
Consumer<Connection> removeMethod) {

..
UserGroupInformation ticket = remoteId.getTicket();
// try SASL if security is enabled or if the ugi contains tokens.
// this causes a SIMPLE client with tokens to attempt SASL
boolean trySasl = UserGroupInformation.isSecurityEnabled() ||
  (ticket != null && !ticket.getTokens().isEmpty());
this.authProtocol = trySasl ? AuthProtocol.SASL : AuthProtocol.NONE;

this.setName("IPC Client (" + socketFactory.hashCode() +") connection to " +
server.toString() +
" from " + ((ticket==null)?"an unknown user":ticket.getUserName()));
this.setDaemon(true);
}{code}


> Connection thread's name should be updated after address changing is detected
> -
>
> Key: HADOOP-12282
> URL: https://issues.apache.org/jira/browse/HADOOP-12282
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Reporter: zhouyingchao
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HADOOP-12282-001.patch
>
>
> In a hadoop hdfs cluster, I changed the standby Namenode's ip address (the 
> hostname is not changed and the routing tables are updated). After the 
> change, the cluster is running as normal.
>  However, I found that the debug message of datanode's IPC still prints the 
> original ip address. By looking into the implementation, it turns out that 
> the original address is used as the thread's name. I think the thread's name 
> should be changed if the address change is detected, because the server 
> address is one of the constituent elements of the thread's name.
> {code:java}
> Connection(ConnectionId remoteId, int serviceClass,
> Consumer<Connection> removeMethod) {
> ..
> UserGroupInformation ticket = remoteId.getTicket();
> // try SASL if security is enabled or if the ugi contains tokens.
> // this causes a SIMPLE client with tokens to attempt SASL
> boolean trySasl = UserGroupInformation.isSecurityEnabled() ||
>   (ticket != null && !ticket.getTokens().isEmpty());
> this.authProtocol = trySasl ? AuthProtocol.SASL : AuthProtocol.NONE;
> this.setName("IPC Client (" + socketFactory.hashCode() +") connection to " +
> server.toString() +
> " from " + ((ticket==null)?"an unknown user":ticket.getUserName()));
> this.setDaemon(true);
> }{code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12282) Connection thread's name should be updated after address changing is detected

2019-07-30 Thread Lisheng Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-12282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-12282:
-
Description: 
In a hadoop hdfs cluster, I changed the standby Namenode's ip address (the 
hostname is not changed and the routing tables are updated). After the change, 
the cluster is running as normal.
 However, I found that the debug message of datanode's IPC still prints the 
original ip address. By looking into the implementation, it turns out that the 
original address is used as the thread's name. I think the thread's name should 
be changed if the address change is detected.

 
{code:java}
Connection(ConnectionId remoteId, int serviceClass,
Consumer<Connection> removeMethod) {

..
UserGroupInformation ticket = remoteId.getTicket();
// try SASL if security is enabled or if the ugi contains tokens.
// this causes a SIMPLE client with tokens to attempt SASL
boolean trySasl = UserGroupInformation.isSecurityEnabled() ||
  (ticket != null && !ticket.getTokens().isEmpty());
this.authProtocol = trySasl ? AuthProtocol.SASL : AuthProtocol.NONE;

this.setName("IPC Client (" + socketFactory.hashCode() +") connection to " +
server.toString() +
" from " + ((ticket==null)?"an unknown user":ticket.getUserName()));
this.setDaemon(true);
}{code}

  was:
In a hadoop hdfs cluster, I changed the standby Namenode's ip address (the 
hostname is not changed and the routing tables are updated). After the change, 
the cluster is running as normal.
However, I found that the debug message of datanode's IPC still prints the 
original ip address. By looking into the implementation, it turns out that the 
original address is used as the thread's name.  I think the thread's name 
should be changed if the address change is detected.



> Connection thread's name should be updated after address changing is detected
> -
>
> Key: HADOOP-12282
> URL: https://issues.apache.org/jira/browse/HADOOP-12282
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Reporter: zhouyingchao
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HADOOP-12282-001.patch
>
>
> In a hadoop hdfs cluster, I changed the standby Namenode's ip address (the 
> hostname is not changed and the routing tables are updated). After the 
> change, the cluster is running as normal.
>  However, I found that the debug message of datanode's IPC still prints the 
> original ip address. By looking into the implementation, it turns out that 
> the original address is used as the thread's name. I think the thread's name 
> should be changed if the address change is detected.
>  
> {code:java}
> Connection(ConnectionId remoteId, int serviceClass,
> Consumer<Connection> removeMethod) {
> ..
> UserGroupInformation ticket = remoteId.getTicket();
> // try SASL if security is enabled or if the ugi contains tokens.
> // this causes a SIMPLE client with tokens to attempt SASL
> boolean trySasl = UserGroupInformation.isSecurityEnabled() ||
>   (ticket != null && !ticket.getTokens().isEmpty());
> this.authProtocol = trySasl ? AuthProtocol.SASL : AuthProtocol.NONE;
> this.setName("IPC Client (" + socketFactory.hashCode() +") connection to " +
> server.toString() +
> " from " + ((ticket==null)?"an unknown user":ticket.getUserName()));
> this.setDaemon(true);
> }{code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16474) S3Guard ProgressiveRenameTracker to mark dest dir as authoritative on success

2019-07-30 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896276#comment-16896276
 ] 

Steve Loughran commented on HADOOP-16474:
-

Proposed: a new DDB method (markRenameDestAuth(path, bulkOperation, auth)), 
used only in the rename trackers, and then only when the rename is successful. 
This would benefit all apps which (still) use rename to commit work.
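A sketch of the proposed hook (the signature is hypothetical, extrapolated from 
the names above; BulkOperationState is the existing s3guard bulk-update type):
{code:java}
// Hypothetical DynamoDBMetadataStore method: after a successful rename,
// mark the destination directory entry as authoritative so later listings
// of it need no S3 LIST call.
void markRenameDestAuth(Path path, BulkOperationState bulkOperation,
    boolean auth) throws IOException;
{code}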

> S3Guard ProgressiveRenameTracker to mark dest dir as authoritative on success
> -
>
> Key: HADOOP-16474
> URL: https://issues.apache.org/jira/browse/HADOOP-16474
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Major
>
> After a directory rename is successful, the destination will contain only 
> those files which have been copied by the S3guard-enabled client, with the 
> directory tree updated as new entries are added.
> At that point, the ProgressiveRenameTracker could tell the store to complete 
> the rename and in so doing, give clients maximum performance without needing 
> any LIST commands.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16474) S3Guard ProgressiveRenameTracker to mark dest dir as authoritative on success

2019-07-30 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-16474:
---

 Summary: S3Guard ProgressiveRenameTracker to mark dest dir as 
authoritative on success
 Key: HADOOP-16474
 URL: https://issues.apache.org/jira/browse/HADOOP-16474
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.3.0
Reporter: Steve Loughran


After a directory rename is successful, the destination will contain only those 
files which have been copied by the S3guard-enabled client, with the directory 
tree updated as new entries are added.

At that point, the ProgressiveRenameTracker could tell the store to complete 
the rename and in so doing, give clients maximum performance without needing 
any LIST commands.




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16473) S3Guard prune to only remove auth dir marker if files (not tombstones) are removed

2019-07-30 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-16473:
---

 Summary: S3Guard prune to only remove auth dir marker if files 
(not tombstones) are removed
 Key: HADOOP-16473
 URL: https://issues.apache.org/jira/browse/HADOOP-16473
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.2.0
Reporter: Steve Loughran


The {{s3guard prune}} command marks all dirs as non-auth if an entry was 
deleted. This makes sense from a performance perspective. But if only 
tombstones are being purged, it doesn't; all it does is hurt the performance of 
future scans.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bgaborg commented on issue #1186: HADOOP-16472. findbugs warning on LocalMetadataStore.ttlTimeProvider sync

2019-07-30 Thread GitBox
bgaborg commented on issue #1186: HADOOP-16472. findbugs warning on 
LocalMetadataStore.ttlTimeProvider sync
URL: https://github.com/apache/hadoop/pull/1186#issuecomment-516483791
 
 
   I wanted to do the same fix, so if findbugs is happy I'll commit.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16472) findbugs warning on LocalMetadataStore.ttlTimeProvider sync

2019-07-30 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896261#comment-16896261
 ] 

Gabor Bota commented on HADOOP-16472:
-

I see that you already have a PR for this so I've unassigned myself.

> findbugs warning on LocalMetadataStore.ttlTimeProvider sync
> ---
>
> Key: HADOOP-16472
> URL: https://issues.apache.org/jira/browse/HADOOP-16472
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> This is a minor issue codewise, but it's interfering with all PR test runs, so 
> I need it fixed. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-16472) findbugs warning on LocalMetadataStore.ttlTimeProvider sync

2019-07-30 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota reassigned HADOOP-16472:
---

Assignee: Steve Loughran  (was: Gabor Bota)

> findbugs warning on LocalMetadataStore.ttlTimeProvider sync
> ---
>
> Key: HADOOP-16472
> URL: https://issues.apache.org/jira/browse/HADOOP-16472
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> This is a minor issue codewise, but it's interfering with all PR test runs, so 
> I need it fixed. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16453) Remove useless trace log in NetUtils.java

2019-07-30 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HADOOP-16453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896259#comment-16896259
 ] 

Íñigo Goiri commented on HADOOP-16453:
--

My main issue is that we are catching Throwable. I think this should catch the 
expected exception from not finding the constructor and just return the 
exception instead of throwing.
Then if it's an unexpected exception we can handle it normally.
Right now, throwing the exception for something somewhat expected seems 
overkill.
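A sketch of that shape (based on the NetUtils code quoted below; this is the 
suggestion, not a committed change):
{code:java}
// Catch only the expected reflective failures and quietly fall back to the
// original exception, instead of catching Throwable, logging and rethrowing.
private static <T extends IOException> T wrapWithMessage(
    T exception, String msg) {
  try {
    Constructor<? extends Throwable> ctor =
        exception.getClass().getConstructor(String.class);
    return (T) ctor.newInstance(msg).initCause(exception);
  } catch (ReflectiveOperationException e) {
    // no usable (String) constructor: return the original as-is
    return exception;
  }
}
{code}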

> Remove useless trace log in NetUtils.java
> -
>
> Key: HADOOP-16453
> URL: https://issues.apache.org/jira/browse/HADOOP-16453
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HADOOP-16453.001.patch
>
>
> When there is no String constructor for the exception, we log a trace 
> message. Given that log-and-throw is not a very good approach, I think the 
> right thing would be to just not log it at all, as in HADOOP-16431.
> {code:java}
> private static <T extends IOException> T wrapWithMessage(
>     T exception, String msg) throws T {
>   Class<? extends Throwable> clazz = exception.getClass();
>   try {
>     Constructor<? extends Throwable> ctor =
>         clazz.getConstructor(String.class);
>     Throwable t = ctor.newInstance(msg);
>     return (T)(t.initCause(exception));
>   } catch (Throwable e) {
>     LOG.trace("Unable to wrap exception of type {}: it has no (String) "
>         + "constructor", clazz, e);
>     throw exception;
>   }
> }
> {code}
>  *exception stack:*
> {code:java}
> 19/07/12 11:23:45 INFO mapreduce.JobSubmitter: Executing with tokens: [Kind: 
> HDFS_DELEGATION_TOKEN, Service: ha-hdfs:azorprc-xiaomi, Ident: (token for 
> sql_prc: HDFS_DELEGATION_TOKEN owner=sql_prc/hadoop@XIAOMI.HADOOP, 
> renewer=yarn_prc, realUser=, issueDate=1562901814007, maxDate=1594437814007, 
> sequenceNumber=3349939, masterKeyId=1400)]
> 19/07/12 11:23:46 TRACE net.NetUtils: Unable to wrap exception of type class 
> java.nio.channels.ClosedByInterruptException: it has no (String) constructor
> java.lang.NoSuchMethodException: 
> java.nio.channels.ClosedByInterruptException.<init>(java.lang.String)
> at java.lang.Class.getConstructor0(Class.java:3082)
> at java.lang.Class.getConstructor(Class.java:1825)
> at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:830)
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:806)
> at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1559)
> at org.apache.hadoop.ipc.Client.call(Client.java:1501)
> at org.apache.hadoop.ipc.Client.call(Client.java:1411)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
> at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:949)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.RequestHedgingProxyProvider$RequestHedgingInvocationHandler$1.call(RequestHedgingProxyProvider.java:143)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> 19/07/12 11:23:46 INFO Configuration.deprecation: No unit for 
> dfs.client.datanode-restart.timeout(30) assuming SECONDS
> 19/07/12 11:23:46 INFO Configuration.deprecation: No unit for 
> dfs.client.datanode-restart.timeout(30) assuming SECONDS
> 19/07/12 11:23:46 INFO Configuration.deprecation: No unit for 
> dfs.client.datanode-restart.timeout(30) assuming SECONDS
> 19/07/12 11:23:46 INFO Configuration.deprecation: No unit for 
> dfs.client.datanode-restart.timeout(30) assuming SECONDS
> 19/07/12 11:23:46 INFO Configuration.deprecation: No unit for 
> dfs.client.datanode-restart.timeout(30) assuming SECONDS
> 19/07/12 11:23:46 INFO Configuration.deprecation: No unit for 
> dfs.client.datanode-restart.timeout(30) assuming SECONDS
> 19/07/12 11:23:46 WARN ipc.Client: Exception encountered while connecting to 
> the server : java.io.InterruptedIOException: Interrupted while waiting for IO 
> on channel java.nio.channels.SocketChannel[connected 
> local=/10.118.30.48:34324 

[jira] [Assigned] (HADOOP-16472) findbugs warning on LocalMetadataStore.ttlTimeProvider sync

2019-07-30 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota reassigned HADOOP-16472:
---

Assignee: Gabor Bota

> findbugs warning on LocalMetadataStore.ttlTimeProvider sync
> ---
>
> Key: HADOOP-16472
> URL: https://issues.apache.org/jira/browse/HADOOP-16472
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
>
> This is a minor issue codewise, but it's interfering with all PR test runs, so 
> I need it fixed. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] adoroszlai commented on issue #1171: HDDS-1834. parent directories not found in secure setup due to ACL check

2019-07-30 Thread GitBox
adoroszlai commented on issue #1171: HDDS-1834. parent directories not found in 
secure setup due to ACL check
URL: https://github.com/apache/hadoop/pull/1171#issuecomment-516474808
 
 
   Thank you for the reviews @anuengineer @lokeshj1703 @xiaoyuyao.  Can someone 
please commit it?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15910) Javadoc for LdapAuthenticationHandler#ENABLE_START_TLS is wrong

2019-07-30 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896214#comment-16896214
 ] 

Hudson commented on HADOOP-15910:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17008 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17008/])
HADOOP-15910. Fix Javadoc for LdapAuthenticationHandler#ENABLE_START_TLS 
(stevel: rev 204a977f556ae4fd279abb568dd852afe093718a)
* (edit) 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/LdapAuthenticationHandler.java


> Javadoc for LdapAuthenticationHandler#ENABLE_START_TLS is wrong
> ---
>
> Key: HADOOP-15910
> URL: https://issues.apache.org/jira/browse/HADOOP-15910
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Don Jeba
>Priority: Trivial
> Fix For: 3.3.0
>
>
> In LdapAuthenticationHandler, the javadoc for ENABLE_START_TLS has the same 
> contents as BASE_DN.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16469) Typo in s3a committers.md doc

2019-07-30 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896215#comment-16896215
 ] 

Hudson commented on HADOOP-16469:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17008 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17008/])
HADOOP-16469. Update committers.md (stevel: rev 
bca86bd289137dade85c125d37a64b29035b6086)
* (edit) 
hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/committers.md


> Typo in s3a committers.md doc
> -
>
> Key: HADOOP-16469
> URL: https://issues.apache.org/jira/browse/HADOOP-16469
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Priority: Minor
>
> There's a typo in the s3a committers doc; a PR to fix it has been filed.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran opened a new pull request #1186: HADOOP-16472. findbugs warning on LocalMetadataStore.ttlTimeProvider sync

2019-07-30 Thread GitBox
steveloughran opened a new pull request #1186: HADOOP-16472. findbugs warning 
on LocalMetadataStore.ttlTimeProvider sync
URL: https://github.com/apache/hadoop/pull/1186
 
 
   Moved the setter and addAncestors to synchronized
   
   Untested! Let's see what findbugs says first
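   
   The fix pattern in outline (a sketch assuming the field name from the 
   findbugs report; ITtlTimeProvider stands in for the store's time-provider 
   type, and this is not the actual diff): give every read and write of 
   ttlTimeProvider the same monitor so no access is unsynchronized.
   
       private ITtlTimeProvider ttlTimeProvider;
   
       public synchronized void setTtlTimeProvider(ITtlTimeProvider p) {
         this.ttlTimeProvider = p;
       }
   
       private synchronized ITtlTimeProvider getTtlTimeProvider() {
         return ttlTimeProvider;
       }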


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16472) findbugs warning on LocalMetadataStore.ttlTimeProvider sync

2019-07-30 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896171#comment-16896171
 ] 

Steve Loughran commented on HADOOP-16472:
-

{code}
Multithreaded correctness Warnings
CodeWarning
IS  Inconsistent synchronization of 
org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore.ttlTimeProvider; locked 75% 
of time
Bug type IS2_INCONSISTENT_SYNC (click for details) 
In class org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore
Field org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore.ttlTimeProvider
Synchronized 75% of the time
Unsynchronized access at LocalMetadataStore.java:[line 623]
Unsynchronized access at LocalMetadataStore.java:[line 609]
Synchronized access at LocalMetadataStore.java:[line 156]
Synchronized access at LocalMetadataStore.java:[line 514]
Synchronized access at LocalMetadataStore.java:[line 541]
Synchronized access at LocalMetadataStore.java:[line 117]
Synchronized access at LocalMetadataStore.java:[line 328]
Synchronized access at LocalMetadataStore.java:[line 206]
Synchronized access at LocalMetadataStore.java:[line 207]
{code}

> findbugs warning on LocalMetadataStore.ttlTimeProvider sync
> ---
>
> Key: HADOOP-16472
> URL: https://issues.apache.org/jira/browse/HADOOP-16472
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Major
>
> This is a minor issue codewise, but it's interfering with all PR test runs, so 
> I need it fixed. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16471) Restore (documented) fs.s3a.SharedInstanceProfileCredentialsProvider

2019-07-30 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16471:

Parent Issue: HADOOP-15620  (was: HADOOP-15619)

> Restore (documented) fs.s3a.SharedInstanceProfileCredentialsProvider
> 
>
> Key: HADOOP-16471
> URL: https://issues.apache.org/jira/browse/HADOOP-16471
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> HADOOP-14248 cut the (obsolete) SharedInstanceProfileCredentialsProvider AWS 
> credential provider.
> But I've noticed it turns up in documentation; people may still be using it.
> Proposed: branch-3 to restore it, while hinting that people should stop using 
> it.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16470) make last AWS credential provider in default auth chain EC2ContainerCredentialsProviderWrapper

2019-07-30 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16470:

Parent Issue: HADOOP-15620  (was: HADOOP-15619)

> make last AWS credential provider in default auth chain 
> EC2ContainerCredentialsProviderWrapper
> --
>
> Key: HADOOP-16470
> URL: https://issues.apache.org/jira/browse/HADOOP-16470
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Major
>
> There's a new credential provider in the AWS SDK,
> {{com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper}}.
> It is designed to work within AWS containers as well as EC2 VMs, using env
> vars to find container credentials first and falling back to the IAM metadata
> service. This way, when deployed in a container or EC2 VM, it will always
> find the session credentials for the deployed IAM role.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16472) findbugs warning on LocalMetadataStore.ttlTimeProvider sync

2019-07-30 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-16472:
---

 Summary: findbugs warning on LocalMetadataStore.ttlTimeProvider 
sync
 Key: HADOOP-16472
 URL: https://issues.apache.org/jira/browse/HADOOP-16472
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build, fs/s3
Affects Versions: 3.3.0
Reporter: Steve Loughran


This is a minor issue code-wise, but it's interfering with all PR test runs, so
I need it fixed.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #1041: HADOOP-15844. Tag S3GuardTool entry points as LimitedPrivate/Evolving

2019-07-30 Thread GitBox
steveloughran commented on issue #1041: HADOOP-15844. Tag S3GuardTool entry 
points as LimitedPrivate/Evolving
URL: https://github.com/apache/hadoop/pull/1041#issuecomment-516436634
 
 
   Tracked down where the "management tools" ref was already used:
   org.apache.hadoop.conf.ReconfigurationTaskStatus.
   
   The point of the discussion was to tell Hadoop developers that this tag was
   used in places, while avoiding any claim that it was hard-coded to a
   specific product.
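
A hedged sketch of the kind of tagging under discussion; the class name and audience string here are illustrative, not necessarily what the patch uses:

{code:java}
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

// Illustrative only: tag an entry point as LimitedPrivate for
// "management tools" and mark its surface as Evolving.
@InterfaceAudience.LimitedPrivate("management tools")
@InterfaceStability.Evolving
public class ManagementToolEntryPointSketch {
  // entry-point methods would live here
}
{code}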
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #1185: HADOOP-16470 IAMInstanceCredentialsProvider to use EC2ContainerCredentialsProviderWrapper

2019-07-30 Thread GitBox
steveloughran commented on issue #1185: HADOOP-16470 
IAMInstanceCredentialsProvider to use EC2ContainerCredentialsProviderWrapper
URL: https://github.com/apache/hadoop/pull/1185#issuecomment-516434646
 
 
   +sid, gabor, it'd be good to get your reviews here.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #1185: HADOOP-16470 IAMInstanceCredentialsProvider to use EC2ContainerCredentialsProviderWrapper

2019-07-30 Thread GitBox
steveloughran commented on issue #1185: HADOOP-16470 
IAMInstanceCredentialsProvider to use EC2ContainerCredentialsProviderWrapper
URL: https://github.com/apache/hadoop/pull/1185#issuecomment-516434259
 
 
   Tested: S3 ireland: -Dparallel-tests -DtestsThreadCount=6 -Ds3guard 
-Ddynamodb -Dnonauth
   
   *I am really happy all these tests are working*


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran opened a new pull request #1185: HADOOP-16470 IAMInstanceCredentialsProvider to use EC2ContainerCredentialsProviderWrapper

2019-07-30 Thread GitBox
steveloughran opened a new pull request #1185: HADOOP-16470 
IAMInstanceCredentialsProvider to use EC2ContainerCredentialsProviderWrapper
URL: https://github.com/apache/hadoop/pull/1185
 
 
   * contains HADOOP-16471: restoration of
     SharedInstanceProfileCredentialsProvider
   
   Change-Id: I6ec2c3585ad1966d6664465a2b3fbfa25fbda46f


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16470) make last AWS credential provider in default auth chain EC2ContainerCredentialsProviderWrapper

2019-07-30 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896165#comment-16896165
 ] 

Steve Loughran commented on HADOOP-16470:
-

Proposed: make org.apache.hadoop.fs.s3a.auth.IAMInstanceCredentialsProvider
use com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper as its inner
provider, and restore SharedInstanceProfileCredentialsProvider as a subclass of
that class.

As a result, the code will track container bindings and future changes.
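
A hedged sketch of the delegation being proposed, assuming the provider simply wraps the SDK class; the method names follow the AWSCredentialsProvider interface, everything else is illustrative (the restored subclass is sketched under HADOOP-16471 above):

{code:java}
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper;

// Illustrative sketch of IAMInstanceCredentialsProvider delegating to
// the SDK wrapper, so container and EC2 deployments are both covered.
public class IAMInstanceCredentialsProviderSketch
    implements AWSCredentialsProvider {

  private final AWSCredentialsProvider delegate =
      new EC2ContainerCredentialsProviderWrapper();

  @Override
  public AWSCredentials getCredentials() {
    // container env vars are checked first, then the EC2 instance
    // metadata service, per the wrapper's documented order
    return delegate.getCredentials();
  }

  @Override
  public void refresh() {
    delegate.refresh();
  }
}
{code}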

> make last AWS credential provider in default auth chain 
> EC2ContainerCredentialsProviderWrapper
> --
>
> Key: HADOOP-16470
> URL: https://issues.apache.org/jira/browse/HADOOP-16470
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Major
>
> There's a new credential provider in the AWS SDK,
> {{com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper}}.
> It is designed to work within AWS containers as well as EC2 VMs, using env
> vars to find container credentials first and falling back to the IAM metadata
> service. This way, when deployed in a container or EC2 VM, it will always
> find the session credentials for the deployed IAM role.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16453) Remove useless trace log in NetUtils.java

2019-07-30 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896136#comment-16896136
 ] 

Ayush Saxena commented on HADOOP-16453:
---

Actually, when I encountered this, I too just removed the log. This exception
seems unavoidable, it is in a common part of the code, and
{{ClosedByInterruptException}} won't be the only exception without a (String)
constructor; knowing all of them and handling each one wouldn't be possible.
Moreover, the exception stays as it is: only our attempt to put the message
inside it failed, and there is no other way to add it anyway.

[~elgoiri] any opinions on how we can handle it?
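
For concreteness, a sketch of what "just not log it" would look like in the wrapWithMessage() quoted below; this is illustrative, not the committed patch, and assumes java.lang.reflect.Constructor is imported:

{code:java}
private static <T extends Throwable> T wrapWithMessage(
    T exception, String msg) throws T {
  Class<? extends Throwable> clazz = exception.getClass();
  try {
    Constructor<? extends Throwable> ctor =
        clazz.getConstructor(String.class);
    Throwable t = ctor.newInstance(msg);
    return (T) (t.initCause(exception));
  } catch (Throwable e) {
    // No (String) constructor, or reflective construction failed:
    // rethrow the original exception unchanged, without logging.
    throw exception;
  }
}
{code}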

> Remove useless trace log in NetUtils.java
> -
>
> Key: HADOOP-16453
> URL: https://issues.apache.org/jira/browse/HADOOP-16453
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HADOOP-16453.001.patch
>
>
> When there is no String constructor for the exception, we log a trace
> message. Given that log-and-throw is not a very good approach, I think the
> right thing would be to just not log it at all, as in HADOOP-16431.
> {code:java}
> private static <T extends Throwable> T wrapWithMessage(
>     T exception, String msg) throws T {
>   Class<? extends Throwable> clazz = exception.getClass();
>   try {
>     Constructor<? extends Throwable> ctor =
>         clazz.getConstructor(String.class);
>     Throwable t = ctor.newInstance(msg);
>     return (T)(t.initCause(exception));
>   } catch (Throwable e) {
>     LOG.trace("Unable to wrap exception of type {}: it has no (String) "
>         + "constructor", clazz, e);
>     throw exception;
>   }
> }
> {code}
>  *exception stack:*
> {code:java}
> 19/07/12 11:23:45 INFO mapreduce.JobSubmitter: Executing with tokens: [Kind: 
> HDFS_DELEGATION_TOKEN, Service: ha-hdfs:azorprc-xiaomi, Ident: (token for 
> sql_prc: HDFS_DELEGATION_TOKEN owner=sql_prc/hadoop@XIAOMI.HADOOP, 
> renewer=yarn_prc, realUser=, issueDate=1562901814007, maxDate=1594437814007, 
> sequenceNumber=3349939, masterKeyId=1400)]
> 19/07/12 11:23:46 TRACE net.NetUtils: Unable to wrap exception of type class 
> java.nio.channels.ClosedByInterruptException: it has no (String) constructor
> java.lang.NoSuchMethodException: 
> java.nio.channels.ClosedByInterruptException.<init>(java.lang.String)
> at java.lang.Class.getConstructor0(Class.java:3082)
> at java.lang.Class.getConstructor(Class.java:1825)
> at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:830)
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:806)
> at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1559)
> at org.apache.hadoop.ipc.Client.call(Client.java:1501)
> at org.apache.hadoop.ipc.Client.call(Client.java:1411)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
> at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:949)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.RequestHedgingProxyProvider$RequestHedgingInvocationHandler$1.call(RequestHedgingProxyProvider.java:143)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> 19/07/12 11:23:46 INFO Configuration.deprecation: No unit for 
> dfs.client.datanode-restart.timeout(30) assuming SECONDS
> 19/07/12 11:23:46 INFO Configuration.deprecation: No unit for 
> dfs.client.datanode-restart.timeout(30) assuming SECONDS
> 19/07/12 11:23:46 INFO Configuration.deprecation: No unit for 
> dfs.client.datanode-restart.timeout(30) assuming SECONDS
> 19/07/12 11:23:46 INFO Configuration.deprecation: No unit for 
> dfs.client.datanode-restart.timeout(30) assuming SECONDS
> 19/07/12 11:23:46 INFO Configuration.deprecation: No unit for 
> dfs.client.datanode-restart.timeout(30) assuming SECONDS
> 19/07/12 11:23:46 INFO Configuration.deprecation: No unit for 
> dfs.client.datanode-restart.timeout(30) assuming SECONDS
> 19/07/12 11:23:46 WARN ipc.Client: Exception encountered while connecting to 

[jira] [Created] (HADOOP-16471) Restore (documented) fs.s3a.SharedInstanceProfileCredentialsProvider

2019-07-30 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-16471:
---

 Summary: Restore (documented) 
fs.s3a.SharedInstanceProfileCredentialsProvider
 Key: HADOOP-16471
 URL: https://issues.apache.org/jira/browse/HADOOP-16471
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.3.0
Reporter: Steve Loughran
Assignee: Steve Loughran


HADOOP-14248 cut the (obsolete) SharedInstanceProfileCredentialsProvider AWS 
credential provider.

But I've noticed it turns up in documentation; people may still be using it.

Proposed: restore it on branch-3, while hinting that people should stop using it.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16470) make last AWS credential provider in default auth chain EC2ContainerCredentialsProviderWrapper

2019-07-30 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896102#comment-16896102
 ] 

Steve Loughran commented on HADOOP-16470:
-

{code}
/**
 * <p>
 * {@link AWSCredentialsProvider} that loads credentials from an Amazon
 * Container (e.g. EC2)
 *
 * Credentials are solved in the following order:
 * <ol>
 * <li>
 * If environment variable "AWS_CONTAINER_CREDENTIALS_RELATIVE_URI" is
 * set (typically on EC2) it is used to hit the metadata service at the
 * following endpoint: http://169.254.170.2
 * </li>
 * <li>
 * If environment variable "AWS_CONTAINER_CREDENTIALS_FULL_URI" is
 * set it is used to hit a metadata service at that URI. Optionally an
 * authorization token can be included in the "Authorization" header of
 * the request by setting the "AWS_CONTAINER_AUTHORIZATION_TOKEN"
 * environment variable.
 * </li>
 * <li>
 * If neither of the above environment variables are specified credentials
 * are attempted to be loaded from Amazon EC2 Instance Metadata Service
 * using the {@link InstanceProfileCredentialsProvider}.
 * </li>
 * </ol>
 * </p>
 */
{code}
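
A short usage sketch under the semantics above; the class and method come from the AWS SDK, the rest is illustrative:

{code:java}
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper;

public class WrapperDemo {
  public static void main(String[] args) {
    // Resolution order is driven entirely by environment variables:
    // AWS_CONTAINER_CREDENTIALS_RELATIVE_URI, then
    // AWS_CONTAINER_CREDENTIALS_FULL_URI, then the EC2 instance
    // metadata service; nothing to configure on the provider itself.
    AWSCredentials creds =
        new EC2ContainerCredentialsProviderWrapper().getCredentials();
    System.out.println("resolved access key: " + creds.getAWSAccessKeyId());
  }
}
{code}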

> make last AWS credential provider in default auth chain 
> EC2ContainerCredentialsProviderWrapper
> --
>
> Key: HADOOP-16470
> URL: https://issues.apache.org/jira/browse/HADOOP-16470
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Major
>
> There's a new credential provider in the AWS SDK,
> {{com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper}}.
> It is designed to work within AWS containers as well as EC2 VMs, using env
> vars to find container credentials first and falling back to the IAM metadata
> service. This way, when deployed in a container or EC2 VM, it will always
> find the session credentials for the deployed IAM role.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


