[jira] [Commented] (HADOOP-15958) Revisiting LICENSE and NOTICE files

2019-06-18 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16867206#comment-16867206
 ] 

Akira Ajisaka commented on HADOOP-15958:


The 006 patch is ready for review.
I have updated the bundled dependencies list:
* https://gist.github.com/aajisaka/cc43e3d8b9f8047dab46f196ad5bfdde
* https://cwiki.apache.org/confluence/display/HADOOP/Bundled+dependencies?src=jira

> Revisiting LICENSE and NOTICE files
> ---
>
> Key: HADOOP-15958
> URL: https://issues.apache.org/jira/browse/HADOOP-15958
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Blocker
> Attachments: HADOOP-15958-002.patch, HADOOP-15958-003.patch, 
> HADOOP-15958-004.patch, HADOOP-15958-wip.001.patch, HADOOP-15958.005.patch, 
> HADOOP-15958.006.patch
>
>
> Originally reported by [~jmclean]:
> * The NOTICE file incorrectly lists copyrights that shouldn't be there and 
> mentions licenses such as MIT, BSD, and public domain that should be 
> mentioned in LICENSE only.
> * It's better to have separate LICENSE and NOTICE files for the source and 
> binary releases.
> http://www.apache.org/dev/licensing-howto.html
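
One way to satisfy both points is to maintain separate license files for the 
source tree and the binary tarball. A hypothetical layout (a sketch only; the 
actual file names used by the patch may differ):

{noformat}
LICENSE.txt      <- ASF license plus licenses of source-bundled third-party code
NOTICE.txt       <- required attribution notices for the source release only
LICENSE-binary   <- additionally covers dependencies bundled in the binary tarball
NOTICE-binary    <- attribution notices for binary-bundled dependencies
{noformat}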






[jira] [Updated] (HADOOP-15958) Revisiting LICENSE and NOTICE files

2019-06-18 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15958:
---
Attachment: HADOOP-15958.006.patch

> Revisiting LICENSE and NOTICE files
> ---
>
> Key: HADOOP-15958
> URL: https://issues.apache.org/jira/browse/HADOOP-15958
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Blocker
> Attachments: HADOOP-15958-002.patch, HADOOP-15958-003.patch, 
> HADOOP-15958-004.patch, HADOOP-15958-wip.001.patch, HADOOP-15958.005.patch, 
> HADOOP-15958.006.patch
>
>
> Originally reported by [~jmclean]:
> * The NOTICE file incorrectly lists copyrights that shouldn't be there and 
> mentions licenses such as MIT, BSD, and public domain that should be 
> mentioned in LICENSE only.
> * It's better to have separate LICENSE and NOTICE files for the source and 
> binary releases.
> http://www.apache.org/dev/licensing-howto.html






[jira] [Assigned] (HADOOP-16381) The JSON License is included in binary tarball via azure-documentdb:1.16.2

2019-06-18 Thread Sushil Ks (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushil Ks reassigned HADOOP-16381:
--

Assignee: Sushil Ks

> The JSON License is included in binary tarball via azure-documentdb:1.16.2
> --
>
> Key: HADOOP-16381
> URL: https://issues.apache.org/jira/browse/HADOOP-16381
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: Sushil Ks
>Priority: Blocker
>
> {noformat}
> $ mvn dependency:tree
> (snip)
> [INFO] +- com.microsoft.azure:azure-documentdb:jar:1.16.2:compile
> [INFO] |  +- com.fasterxml.uuid:java-uuid-generator:jar:3.1.4:compile
> [INFO] |  +- org.json:json:jar:20140107:compile
> [INFO] |  +- org.apache.httpcomponents:httpcore:jar:4.4.10:compile
> [INFO] |  \- joda-time:joda-time:jar:2.9.9:compile
> {noformat}
> org.json:json is under the JSON License and must be removed.






[jira] [Comment Edited] (HADOOP-15958) Revisiting LICENSE and NOTICE files

2019-06-18 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16867198#comment-16867198
 ] 

Akira Ajisaka edited comment on HADOOP-15958 at 6/19/19 4:12 AM:
-

Thanks [~jojochuang] for the comment. Rebased the patch.

When rebasing the patch, I found that the JSON License is now included and 
must be removed (HADOOP-16381).


was (Author: ajisakaa):
Thanks [~jojochuang] for the comment. Rebased the patch.

When rebasing the patch, I found that the JSON License is now included 
(HADOOP-16381).

> Revisiting LICENSE and NOTICE files
> ---
>
> Key: HADOOP-15958
> URL: https://issues.apache.org/jira/browse/HADOOP-15958
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Blocker
> Attachments: HADOOP-15958-002.patch, HADOOP-15958-003.patch, 
> HADOOP-15958-004.patch, HADOOP-15958-wip.001.patch, HADOOP-15958.005.patch
>
>
> Originally reported by [~jmclean]:
> * The NOTICE file incorrectly lists copyrights that shouldn't be there and 
> mentions licenses such as MIT, BSD, and public domain that should be 
> mentioned in LICENSE only.
> * It's better to have separate LICENSE and NOTICE files for the source and 
> binary releases.
> http://www.apache.org/dev/licensing-howto.html






[jira] [Commented] (HADOOP-15958) Revisiting LICENSE and NOTICE files

2019-06-18 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16867198#comment-16867198
 ] 

Akira Ajisaka commented on HADOOP-15958:


Thanks [~jojochuang] for the comment. Rebased the patch.

When rebasing the patch, I found that the JSON License is now included 
(HADOOP-16381).

> Revisiting LICENSE and NOTICE files
> ---
>
> Key: HADOOP-15958
> URL: https://issues.apache.org/jira/browse/HADOOP-15958
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Blocker
> Attachments: HADOOP-15958-002.patch, HADOOP-15958-003.patch, 
> HADOOP-15958-004.patch, HADOOP-15958-wip.001.patch, HADOOP-15958.005.patch
>
>
> Originally reported by [~jmclean]:
> * The NOTICE file incorrectly lists copyrights that shouldn't be there and 
> mentions licenses such as MIT, BSD, and public domain that should be 
> mentioned in LICENSE only.
> * It's better to have separate LICENSE and NOTICE files for the source and 
> binary releases.
> http://www.apache.org/dev/licensing-howto.html






[jira] [Updated] (HADOOP-15958) Revisiting LICENSE and NOTICE files

2019-06-18 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15958:
---
Attachment: HADOOP-15958.005.patch

> Revisiting LICENSE and NOTICE files
> ---
>
> Key: HADOOP-15958
> URL: https://issues.apache.org/jira/browse/HADOOP-15958
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Blocker
> Attachments: HADOOP-15958-002.patch, HADOOP-15958-003.patch, 
> HADOOP-15958-004.patch, HADOOP-15958-wip.001.patch, HADOOP-15958.005.patch
>
>
> Originally reported by [~jmclean]:
> * The NOTICE file incorrectly lists copyrights that shouldn't be there and 
> mentions licenses such as MIT, BSD, and public domain that should be 
> mentioned in LICENSE only.
> * It's better to have separate LICENSE and NOTICE files for the source and 
> binary releases.
> http://www.apache.org/dev/licensing-howto.html






[jira] [Created] (HADOOP-16381) The JSON License is included in binary tarball via azure-documentdb:1.16.2

2019-06-18 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HADOOP-16381:
--

 Summary: The JSON License is included in binary tarball via 
azure-documentdb:1.16.2
 Key: HADOOP-16381
 URL: https://issues.apache.org/jira/browse/HADOOP-16381
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Akira Ajisaka


{noformat}
$ mvn dependency:tree
(snip)
[INFO] +- com.microsoft.azure:azure-documentdb:jar:1.16.2:compile
[INFO] |  +- com.fasterxml.uuid:java-uuid-generator:jar:3.1.4:compile
[INFO] |  +- org.json:json:jar:20140107:compile
[INFO] |  +- org.apache.httpcomponents:httpcore:jar:4.4.10:compile
[INFO] |  \- joda-time:joda-time:jar:2.9.9:compile
{noformat}
org.json:json is under the JSON License and must be removed.
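
A hedged sketch of how such a transitive dependency is typically excluded in a 
Maven POM (coordinates taken from the dependency tree above; the actual fix may 
differ, e.g. upgrading or replacing azure-documentdb):

{code:xml}
<dependency>
  <groupId>com.microsoft.azure</groupId>
  <artifactId>azure-documentdb</artifactId>
  <version>1.16.2</version>
  <exclusions>
    <!-- org.json:json is under the JSON License (Category X at the ASF) -->
    <exclusion>
      <groupId>org.json</groupId>
      <artifactId>json</artifactId>
    </exclusion>
  </exclusions>
</dependency>
{code}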






[jira] [Commented] (HADOOP-15976) NameNode Performance degradation When Single LdapServer become a bottleneck in Ldap-based mapping module

2019-06-18 Thread Shen Yinjie (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16867191#comment-16867191
 ] 

Shen Yinjie commented on HADOOP-15976:
--

[~jojochuang], [~fengyongshe] cannot spare time currently and handed this 
issue over to me offline. I will create a PR soon.

> NameNode Performance degradation When Single LdapServer become a  bottleneck 
> in Ldap-based mapping module 
> --
>
> Key: HADOOP-15976
> URL: https://issues.apache.org/jira/browse/HADOOP-15976
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.1.1
>Reporter: fengyongshe
>Assignee: fengyongshe
>Priority: Major
> Attachments: HADOOP-15976.patch, image003(12-05-1(12-05-10-36-26).jpg
>
>
> On a 2000+ node cluster, we use OpenLDAP to manage users and groups. When 
> LdapGroupsMapping is used, group look-ups cause serious faults, including 
> NameNode performance degradation and NameNode crashes:
> WARN security.Groups: Potential performance problem:
>  getGroups(user=) took 46817 milliseconds.
>  INFO namenode.FSNamesystem (FSNamesystemLock.java:writeUnlock(252)) - 
> FSNamesystem write lock held for 46817 ms via java.lang.Thread.getStackTrace
> We found the LDAP server becomes the bottleneck for NameNode operations; a 
> single LDAP server supports only hundreds of requests per second.
> P.S. The server was running nslcd.
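
For context, LDAP-based group mapping is enabled with configuration along these 
lines (a sketch; the URL value is illustrative):

{code:xml}
<!-- core-site.xml: resolve user groups via LDAP instead of the OS -->
<property>
  <name>hadoop.security.group.mapping</name>
  <value>org.apache.hadoop.security.LdapGroupsMapping</value>
</property>
<property>
  <!-- A single endpoint like this is the bottleneck described above. -->
  <name>hadoop.security.group.mapping.ldap.url</name>
  <value>ldap://ldap-server.example.com:389</value>
</property>
{code}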






[GitHub] [hadoop] hadoop-yetus commented on issue #987: HDDS-1685. Recon: Add support for 'start' query param to containers…

2019-06-18 Thread GitBox
hadoop-yetus commented on issue #987: HDDS-1685. Recon: Add support for 'start' 
query param to containers…
URL: https://github.com/apache/hadoop/pull/987#issuecomment-503360653
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 507 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 486 | trunk passed |
   | +1 | compile | 259 | trunk passed |
   | +1 | checkstyle | 72 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 856 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 162 | trunk passed |
   | 0 | spotbugs | 310 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 498 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 439 | the patch passed |
   | +1 | compile | 263 | the patch passed |
   | +1 | javac | 263 | the patch passed |
   | +1 | checkstyle | 78 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 746 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 183 | the patch passed |
   | -1 | findbugs | 316 | hadoop-ozone generated 1 new + 0 unchanged - 0 fixed 
= 1 total (was 0) |
   ||| _ Other Tests _ |
   | -1 | unit | 164 | hadoop-hdds in the patch failed. |
   | -1 | unit | 963 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 49 | The patch does not generate ASF License warnings. |
   | | | 6420 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-ozone |
   |  |  Possible null pointer dereference of keyValue in 
org.apache.hadoop.ozone.recon.spi.impl.ContainerDBServiceProviderImpl.getContainers(int,
 long)  Dereferenced at ContainerDBServiceProviderImpl.java:keyValue in 
org.apache.hadoop.ozone.recon.spi.impl.ContainerDBServiceProviderImpl.getContainers(int,
 long)  Dereferenced at ContainerDBServiceProviderImpl.java:[line 231] |
   | Failed junit tests | hadoop.ozone.container.common.impl.TestHddsDispatcher 
|
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-987/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/987 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux d432f3d00b1a 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 37bd5bb |
   | Default Java | 1.8.0_212 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-987/1/artifact/out/new-findbugs-hadoop-ozone.html
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-987/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-987/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-987/1/testReport/ |
   | Max. process+thread count | 4557 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-recon U: hadoop-ozone/ozone-recon |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-987/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #930: HDDS-1651. Create a http.policy config for Ozone

2019-06-18 Thread GitBox
hadoop-yetus commented on issue #930: HDDS-1651. Create a http.policy config 
for Ozone
URL: https://github.com/apache/hadoop/pull/930#issuecomment-503359575
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 519 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 67 | Maven dependency ordering for branch |
   | +1 | mvninstall | 498 | trunk passed |
   | +1 | compile | 263 | trunk passed |
   | +1 | checkstyle | 73 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 863 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 164 | trunk passed |
   | 0 | spotbugs | 313 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 505 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 34 | Maven dependency ordering for patch |
   | +1 | mvninstall | 439 | the patch passed |
   | +1 | compile | 273 | the patch passed |
   | +1 | javac | 273 | the patch passed |
   | -0 | checkstyle | 39 | hadoop-hdds: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 682 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 161 | the patch passed |
   | +1 | findbugs | 524 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 148 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1092 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 50 | The patch does not generate ASF License warnings. |
   | | | 6616 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.common.impl.TestHddsDispatcher 
|
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-930/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/930 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux a041c46a08f4 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 37bd5bb |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-930/4/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-930/4/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-930/4/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-930/4/testReport/ |
   | Max. process+thread count | 5006 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/framework 
hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-930/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #951: HADOOP-15183. S3Guard store becomes inconsistent after partial failure of rename

2019-06-18 Thread GitBox
hadoop-yetus commented on issue #951: HADOOP-15183. S3Guard store becomes 
inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/951#issuecomment-503358538
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 31 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 2 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 34 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1024 | trunk passed |
   | +1 | compile | 1071 | trunk passed |
   | +1 | checkstyle | 144 | trunk passed |
   | +1 | mvnsite | 130 | trunk passed |
   | +1 | shadedclient | 1010 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 100 | trunk passed |
   | 0 | spotbugs | 67 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 182 | trunk passed |
   | -0 | patch | 107 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for patch |
   | +1 | mvninstall | 79 | the patch passed |
   | +1 | compile | 1024 | the patch passed |
   | +1 | javac | 1024 | the patch passed |
   | -0 | checkstyle | 140 | root: The patch generated 19 new + 107 unchanged - 
4 fixed = 126 total (was 111) |
   | +1 | mvnsite | 126 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 3 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 688 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 55 | hadoop-tools_hadoop-aws generated 1 new + 1 unchanged 
- 0 fixed = 2 total (was 1) |
   | +1 | findbugs | 206 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 519 | hadoop-common in the patch passed. |
   | +1 | unit | 292 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 52 | The patch does not generate ASF License warnings. |
   | | | 7025 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-951/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/951 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux db3ed9314a41 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 37bd5bb |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-951/10/artifact/out/diff-checkstyle-root.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-951/10/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-951/10/testReport/ |
   | Max. process+thread count | 1413 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-951/10/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] DadanielZ edited a comment on issue #988: HADOOP-16376. ABFS: Override access() to no-op.

2019-06-18 Thread GitBox
DadanielZ edited a comment on issue #988: HADOOP-16376. ABFS: Override access() 
to no-op.
URL: https://github.com/apache/hadoop/pull/988#issuecomment-503355935
 
 
   @steveloughran 
   cherry-picked from trunk
   All tests passed for my west us account:
   non-xns account
   Tests run: 35, Failures: 0, Errors: 0, Skipped: 0
   Tests run: 383, Failures: 0, Errors: 0, Skipped: 207
   Tests run: 168, Failures: 0, Errors: 0, Skipped: 15
   
   xns account
   Tests run: 35, Failures: 0, Errors: 0, Skipped: 0
   Tests run: 383, Failures: 0, Errors: 0, Skipped: 23
   Tests run: 168, Failures: 0, Errors: 0, Skipped: 21
   





[GitHub] [hadoop] DadanielZ commented on issue #988: HADOOP-16376. ABFS: Override access() to no-op.

2019-06-18 Thread GitBox
DadanielZ commented on issue #988: HADOOP-16376. ABFS: Override access() to 
no-op.
URL: https://github.com/apache/hadoop/pull/988#issuecomment-503355935
 
 
   @steveloughran 
   cherry picked from trunk
   All tests passed for my west us account:
   non-xns account
   Tests run: 35, Failures: 0, Errors: 0, Skipped: 0
   Tests run: 383, Failures: 0, Errors: 0, Skipped: 207
   Tests run: 168, Failures: 0, Errors: 0, Skipped: 15
   
   xns account
   Tests run: 35, Failures: 0, Errors: 0, Skipped: 0
   Tests run: 383, Failures: 0, Errors: 0, Skipped: 23
   Tests run: 168, Failures: 0, Errors: 0, Skipped: 21
   





[GitHub] [hadoop] DadanielZ closed pull request #971: HADOOP-16376: Override access() to no-up

2019-06-18 Thread GitBox
DadanielZ closed pull request #971: HADOOP-16376: Override access() to no-up
URL: https://github.com/apache/hadoop/pull/971
 
 
   





[GitHub] [hadoop] DadanielZ commented on issue #971: HADOOP-16376: Override access() to no-up

2019-06-18 Thread GitBox
DadanielZ commented on issue #971: HADOOP-16376: Override access() to no-up
URL: https://github.com/apache/hadoop/pull/971#issuecomment-503349907
 
 
   Thanks for the review; closing this PR as it has been committed to trunk.





[GitHub] [hadoop] DadanielZ opened a new pull request #988: HADOOP-16376. ABFS: Override access() to no-op.

2019-06-18 Thread GitBox
DadanielZ opened a new pull request #988: HADOOP-16376. ABFS: Override access() 
to no-op.
URL: https://github.com/apache/hadoop/pull/988
 
 
   Contributed by Da Zhou.
   
   Change-Id: Ia0024bba32250189a87eb6247808b2473c331ed0





[GitHub] [hadoop] hadoop-yetus commented on issue #965: HDDS-1684. OM should create Ratis related dirs only if ratis is enabled

2019-06-18 Thread GitBox
hadoop-yetus commented on issue #965: HDDS-1684. OM should create Ratis related 
dirs only if ratis is enabled
URL: https://github.com/apache/hadoop/pull/965#issuecomment-503344804
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 13 | https://github.com/apache/hadoop/pull/965 does not apply 
to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/965 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-965/3/console |
   | versions | git=2.7.4 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hanishakoneru merged pull request #965: HDDS-1684. OM should create Ratis related dirs only if ratis is enabled

2019-06-18 Thread GitBox
hanishakoneru merged pull request #965: HDDS-1684. OM should create Ratis 
related dirs only if ratis is enabled
URL: https://github.com/apache/hadoop/pull/965
 
 
   





[GitHub] [hadoop] hanishakoneru commented on issue #965: HDDS-1684. OM should create Ratis related dirs only if ratis is enabled

2019-06-18 Thread GitBox
hanishakoneru commented on issue #965: HDDS-1684. OM should create Ratis 
related dirs only if ratis is enabled
URL: https://github.com/apache/hadoop/pull/965#issuecomment-503344562
 
 
   Test failures are unrelated, and I fixed one checkstyle issue (an unused 
import). Will merge the PR. Thanks for the reviews @arp7 and @bharatviswa504.





[GitHub] [hadoop] vivekratnavel commented on issue #987: HDDS-1685. Recon: Add support for 'start' query param to containers…

2019-06-18 Thread GitBox
vivekratnavel commented on issue #987: HDDS-1685. Recon: Add support for 
'start' query param to containers…
URL: https://github.com/apache/hadoop/pull/987#issuecomment-503339287
 
 
   @swagle @avijayanhwx @bharatviswa504 Please review this when you find time





[GitHub] [hadoop] vivekratnavel commented on issue #987: HDDS-1685. Recon: Add support for 'start' query param to containers…

2019-06-18 Thread GitBox
vivekratnavel commented on issue #987: HDDS-1685. Recon: Add support for 
'start' query param to containers…
URL: https://github.com/apache/hadoop/pull/987#issuecomment-503339320
 
 
   /label ozone





[GitHub] [hadoop] vivekratnavel opened a new pull request #987: HDDS-1685. Recon: Add support for 'start' query param to containers…

2019-06-18 Thread GitBox
vivekratnavel opened a new pull request #987: HDDS-1685. Recon: Add support for 
'start' query param to containers…
URL: https://github.com/apache/hadoop/pull/987
 
 
   …and containers/{id} endpoints
   
   This PR adds support for the "start" query param, seeking to the given key 
in RocksDB, for the containers and containers/{id} endpoints.
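
   As context, a minimal sketch of the seek-then-scan pattern this describes, 
using RocksDB's Java API (the helper name, key encoding, and limit handling 
are assumptions, not the PR's actual code):

{code:java}
import java.util.ArrayList;
import java.util.List;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksIterator;

// Hypothetical helper: position at 'startKey' and return up to 'limit' values.
static List<byte[]> scanFrom(RocksDB db, byte[] startKey, int limit) {
  List<byte[]> values = new ArrayList<>();
  try (RocksIterator it = db.newIterator()) {
    it.seek(startKey);                       // first key >= startKey
    while (it.isValid() && values.size() < limit) {
      values.add(it.value());
      it.next();
    }
  }
  return values;
}
{code}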





[jira] [Commented] (HADOOP-16377) Moving logging APIs over to slf4j

2019-06-18 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16867095#comment-16867095
 ] 

Steve Loughran commented on HADOOP-16377:
-

org.slf4j imports shouldn't be added to the same block as the org.apache ones. 
The order is nominally:

java.*
--
non-org.apache
--
org.apache.*
--
static

I know we don't maintain this well, and once things are in we don't move them 
about for fear of creating merge problems, but let's try to do our best on new 
imports.
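
A minimal sketch of that grouping in practice (class choices are illustrative 
only):

{code:java}
import java.util.List;                        // java.*

import org.slf4j.Logger;                      // non-org.apache: this is the
import org.slf4j.LoggerFactory;               // block slf4j imports belong in

import org.apache.hadoop.conf.Configuration;  // org.apache.*

import static org.junit.Assert.assertEquals;  // static imports last
{code}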

Other than that, looks ok. 

In {{FileSystem}}, we created a private Logger instance, {{LOGGER}}, to move to 
SLF4J while leaving the public one alone (HADOOP-13605). If the public LOG goes 
to SLF4J, then we could combine them and move the LOGGER refs over to LOG.

We could just delete the LOG entry and force all bits of code referring to it 
to move to their own private log. As I recall, the reasons not to do this were:
* we didn't know what external apps were using it (i.e. was it used by 
subclasses)
* we were worried about whether people expected our internal uses to be 
controlled by the o.a.fs.FileSystem log settings

Given that this breaks compatibility with any external uses anyway, how about 
we remove it and force our own code to have private logs?
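
For reference, a sketch of the two declarations under discussion, modelled on 
what HADOOP-13605 introduced (the exact modifiers are recalled from memory, not 
verified against trunk):

{code:java}
// Public commons-logging Log: part of the de-facto API surface, so
// external code and subclasses may reference it.
public static final org.apache.commons.logging.Log LOG =
    org.apache.commons.logging.LogFactory.getLog(FileSystem.class);

// Private SLF4J logger added so internal FileSystem code could migrate
// without touching the public field.
private static final org.slf4j.Logger LOGGER =
    org.slf4j.LoggerFactory.getLogger(FileSystem.class);
{code}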

> Moving logging APIs over to slf4j
> -
>
> Key: HADOOP-16377
> URL: https://issues.apache.org/jira/browse/HADOOP-16377
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16377-001.patch
>
>
> As of today, there are still 50 references to log4j1
> {code}
> $ grep -r "import org.apache.commons.logging.Log;" . | wc -l
>   50
> {code}
> To achieve the goal of HADOOP-12956/HADOOP-16206, we should invest time to 
> move them to slf4j
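
The per-class change implied here looks roughly like the following (MyClass is 
a hypothetical example; real call sites may also need converting to 
parameterized messages):

{code:java}
// Before: commons-logging (the log4j1-era API being grepped for)
//   import org.apache.commons.logging.Log;
//   import org.apache.commons.logging.LogFactory;
//   private static final Log LOG = LogFactory.getLog(MyClass.class);

// After: SLF4J
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class MyClass {
  private static final Logger LOG = LoggerFactory.getLogger(MyClass.class);

  void process(String path) {
    LOG.info("Processing {}", path); // parameterized; no string concatenation
  }
}
{code}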






[jira] [Commented] (HADOOP-16350) Ability to tell Hadoop not to request KMS Information from Remote NN

2019-06-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16867085#comment-16867085
 ] 

Hadoop QA commented on HADOOP-16350:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HADOOP-16350 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-16350 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12972143/HADOOP-16350.01.patch 
|
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16329/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Ability to tell Hadoop not to request KMS Information from Remote NN 
> -
>
> Key: HADOOP-16350
> URL: https://issues.apache.org/jira/browse/HADOOP-16350
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, kms
>Affects Versions: 2.8.3, 3.0.0, 2.7.6, 3.1.2
>Reporter: Greg Senia
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16350.00.patch, HADOOP-16350.01.patch
>
>
> Before HADOOP-14104, remote KMSServer URIs and their associated delegation 
> tokens were not requested from the remote NameNode. Many customers were using 
> this as a security feature to prevent TDE/Encryption Zone data from being 
> distcped to remote clusters. But there was still a use case for allowing 
> distcp of data residing in folders that are not encrypted with a 
> KMSProvider/Encryption Zone.
> So after upgrading to a version of Hadoop that contained HADOOP-14104, distcp 
> now fails, as we, along with other customers (HDFS-13696), DO NOT allow 
> KMSServer endpoints to be exposed outside our cluster network: data residing 
> in these TDE zones is very critical and cannot be distcped between clusters.
> I propose adding a new code block gated by the custom property 
> "hadoop.security.kms.client.allow.remote.kms". It will default to "true", 
> keeping the current HADOOP-14104 behavior, but setting it to "false" will let 
> this area of code operate as it did before HADOOP-14104. I can see the value 
> in HADOOP-14104, but the pre-existing behavior should at least have remained 
> available as an option, so Hadoop/KMS code can avoid requesting remote 
> KMSServer URIs, which would otherwise attempt to get a delegation token even 
> when not operating on encrypted zones.
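
A sketch of how the proposed flag would be read from configuration (the 
property name comes from the proposal above; the helper method and its 
integration point into the KMS client code are hypothetical):

{code:java}
import org.apache.hadoop.conf.Configuration;

// Hypothetical guard: the default of "true" preserves the current
// HADOOP-14104 behavior; "false" restores the pre-HADOOP-14104 one.
static boolean remoteKmsAllowed(Configuration conf) {
  return conf.getBoolean("hadoop.security.kms.client.allow.remote.kms", true);
}
{code}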
> The error below occurs when KMS server traffic is not allowed between cluster 
> networks per an enterprise security standard that cannot be changed; the 
> request for an exception was denied, so the only solution is a feature that 
> does not attempt to request tokens.
> {code:java}
> $ hadoop distcp -Ddfs.namenode.kerberos.principal.pattern=* 
> -Dmapreduce.job.hdfs-servers.token-renewal.exclude=tech 
> hdfs:///processed/public/opendata/samples/distcp_test/distcp_file.txt 
> hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt
> 19/05/29 14:06:09 INFO tools.DistCp: Input Options: DistCpOptions
> {atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, overwrite=false, append=false, useDiff=false, 
> fromSnapshot=null, toSnapshot=null, skipCRC=false, blocking=true, 
> numListstatusThreads=0, maxMaps=20, mapBandwidth=100, 
> sslConfigurationFile='null', copyStrategy='uniformsize', preserveStatus=[], 
> preserveRawXattrs=false, atomicWorkPath=null, logPath=null, 
> sourceFileListing=null, 
> sourcePaths=[hdfs:/processed/public/opendata/samples/distcp_test/distcp_file.txt],
>  
> targetPath=hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt,
>  targetPathExists=true, filtersFile='null', verboseLog=false}
> 19/05/29 14:06:09 INFO client.AHSProxy: Connecting to Application History 
> server at ha21d53mn.unit.hdp.example.com/10.70.49.2:10200
> 19/05/29 14:06:10 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 
> 5093920 for gss2002 on ha-hdfs:unit
> 19/05/29 14:06:10 INFO security.TokenCache: Got dt for hdfs://unit; Kind: 
> HDFS_DELEGATION_TOKEN, Service: ha-hdfs:unit, Ident: (HDFS_DELEGATION_TOKEN 
> token 5093920 for gss2002)
> 19/05/29 14:06:10 INFO security.TokenCache: Got dt for hdfs://unit; Kind: 
> kms-dt, Service: ha21d53en.unit.hdp.example.com:9292, Ident: (owner=gss2002, 
> renewer=yarn, realUser=, issueDate=1559153170120, maxDate=1559757970120, 
> 

[GitHub] [hadoop] hadoop-yetus commented on issue #984: HDDS-1674 Make ScmBlockLocationProtocol message type based

2019-06-18 Thread GitBox
hadoop-yetus commented on issue #984: HDDS-1674 Make ScmBlockLocationProtocol 
message type based
URL: https://github.com/apache/hadoop/pull/984#issuecomment-503330311
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 43 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 520 | trunk passed |
   | +1 | compile | 289 | trunk passed |
   | +1 | checkstyle | 83 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 958 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 172 | trunk passed |
   | 0 | spotbugs | 344 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 536 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 473 | the patch passed |
   | +1 | compile | 294 | the patch passed |
   | +1 | cc | 294 | the patch passed |
   | +1 | javac | 294 | the patch passed |
   | +1 | checkstyle | 86 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 749 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 169 | the patch passed |
   | +1 | findbugs | 552 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 184 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1427 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 54 | The patch does not generate ASF License warnings. |
   | | | 6757 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.common.impl.TestHddsDispatcher 
|
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.scm.node.TestQueryNode |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-984/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/984 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 7aea99f99eaf 4.4.0-144-generic #170~14.04.1-Ubuntu SMP Mon 
Mar 18 15:02:05 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 81ec909 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-984/3/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-984/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-984/3/testReport/ |
   | Max. process+thread count | 5406 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common U: hadoop-hdds/common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-984/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Updated] (HADOOP-16350) Ability to tell Hadoop not to request KMS Information from Remote NN

2019-06-18 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-16350:

Attachment: HADOOP-16350.01.patch

> Ability to tell Hadoop not to request KMS Information from Remote NN 
> -
>
> Key: HADOOP-16350
> URL: https://issues.apache.org/jira/browse/HADOOP-16350
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, kms
>Affects Versions: 2.8.3, 3.0.0, 2.7.6, 3.1.2
>Reporter: Greg Senia
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16350.00.patch, HADOOP-16350.01.patch
>
>
> Before HADOOP-14104, remote KMSServer URIs and their associated delegation 
> tokens were not requested from the remote NameNode. Many customers were using 
> this as a security feature to prevent TDE/Encryption Zone data from being 
> distcped to remote clusters. But there was still a use case for allowing 
> distcp of data residing in folders that are not encrypted with a 
> KMSProvider/Encryption Zone.
> So after upgrading to a version of Hadoop that contained HADOOP-14104, distcp 
> now fails, as we, along with other customers (HDFS-13696), DO NOT allow 
> KMSServer endpoints to be exposed outside our cluster network: data residing 
> in these TDE zones is very critical and cannot be distcped between clusters.
> I propose adding a new code block gated by the custom property 
> "hadoop.security.kms.client.allow.remote.kms". It will default to "true", 
> keeping the current HADOOP-14104 behavior, but setting it to "false" will let 
> this area of code operate as it did before HADOOP-14104. I can see the value 
> in HADOOP-14104, but the pre-existing behavior should at least have remained 
> available as an option, so Hadoop/KMS code can avoid requesting remote 
> KMSServer URIs, which would otherwise attempt to get a delegation token even 
> when not operating on encrypted zones.
> The error below occurs when KMS server traffic is not allowed between cluster 
> networks per an enterprise security standard that cannot be changed; the 
> request for an exception was denied, so the only solution is a feature that 
> does not attempt to request tokens.
> {code:java}
> $ hadoop distcp -Ddfs.namenode.kerberos.principal.pattern=* 
> -Dmapreduce.job.hdfs-servers.token-renewal.exclude=tech 
> hdfs:///processed/public/opendata/samples/distcp_test/distcp_file.txt 
> hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt
> 19/05/29 14:06:09 INFO tools.DistCp: Input Options: DistCpOptions
> {atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, overwrite=false, append=false, useDiff=false, 
> fromSnapshot=null, toSnapshot=null, skipCRC=false, blocking=true, 
> numListstatusThreads=0, maxMaps=20, mapBandwidth=100, 
> sslConfigurationFile='null', copyStrategy='uniformsize', preserveStatus=[], 
> preserveRawXattrs=false, atomicWorkPath=null, logPath=null, 
> sourceFileListing=null, 
> sourcePaths=[hdfs:/processed/public/opendata/samples/distcp_test/distcp_file.txt],
>  
> targetPath=hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt,
>  targetPathExists=true, filtersFile='null', verboseLog=false}
> 19/05/29 14:06:09 INFO client.AHSProxy: Connecting to Application History 
> server at ha21d53mn.unit.hdp.example.com/10.70.49.2:10200
> 19/05/29 14:06:10 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 
> 5093920 for gss2002 on ha-hdfs:unit
> 19/05/29 14:06:10 INFO security.TokenCache: Got dt for hdfs://unit; Kind: 
> HDFS_DELEGATION_TOKEN, Service: ha-hdfs:unit, Ident: (HDFS_DELEGATION_TOKEN 
> token 5093920 for gss2002)
> 19/05/29 14:06:10 INFO security.TokenCache: Got dt for hdfs://unit; Kind: 
> kms-dt, Service: ha21d53en.unit.hdp.example.com:9292, Ident: (owner=gss2002, 
> renewer=yarn, realUser=, issueDate=1559153170120, maxDate=1559757970120, 
> sequenceNumber=237, masterKeyId=2)
> 19/05/29 14:06:10 INFO tools.SimpleCopyListing: Paths (files+dirs) cnt = 1; 
> dirCnt = 0
> 19/05/29 14:06:10 INFO tools.SimpleCopyListing: Build file listing completed.
> 19/05/29 14:06:10 INFO tools.DistCp: Number of paths in the copy list: 1
> 19/05/29 14:06:10 INFO tools.DistCp: Number of paths in the copy list: 1
> 19/05/29 14:06:10 INFO client.AHSProxy: Connecting to Application History 
> server at ha21d53mn.unit.hdp.example.com/10.70.49.2:10200
> 19/05/29 14:06:10 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 
> 556079 for gss2002 on ha-hdfs:tech
> 19/05/29 14:06:10 ERROR tools.DistCp: Exception encountered 
> java.io.IOException: java.net.NoRouteToHostException: No route to host (Host 
> unreachable)
> at 
> 

[jira] [Updated] (HADOOP-16350) Ability to tell Hadoop not to request KMS Information from Remote NN

2019-06-18 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-16350:

Status: Patch Available  (was: Open)

> Ability to tell Hadoop not to request KMS Information from Remote NN 
> -
>
> Key: HADOOP-16350
> URL: https://issues.apache.org/jira/browse/HADOOP-16350
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, kms
>Affects Versions: 3.1.2, 2.7.6, 3.0.0, 2.8.3
>Reporter: Greg Senia
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16350.00.patch
>
>
> Before HADOOP-14104, remote KMSServer URIs and their associated delegation 
> tokens were not requested from the remote NameNode. Many customers were using 
> this as a security feature to prevent TDE/Encryption Zone data from being 
> distcped to remote clusters. But there was still a use case for allowing 
> distcp of data residing in folders that are not encrypted with a 
> KMSProvider/Encryption Zone.
> So after upgrading to a version of Hadoop that contained HADOOP-14104, distcp 
> now fails, as we, along with other customers (HDFS-13696), DO NOT allow 
> KMSServer endpoints to be exposed outside our cluster network: data residing 
> in these TDE zones is very critical and cannot be distcped between clusters.
> I propose adding a new code block gated by the custom property 
> "hadoop.security.kms.client.allow.remote.kms". It will default to "true", 
> keeping the current HADOOP-14104 behavior, but setting it to "false" will let 
> this area of code operate as it did before HADOOP-14104. I can see the value 
> in HADOOP-14104, but the pre-existing behavior should at least have remained 
> available as an option, so Hadoop/KMS code can avoid requesting remote 
> KMSServer URIs, which would otherwise attempt to get a delegation token even 
> when not operating on encrypted zones.
> The error below occurs when KMS server traffic is not allowed between cluster 
> networks per an enterprise security standard that cannot be changed; the 
> request for an exception was denied, so the only solution is a feature that 
> does not attempt to request tokens.
> {code:java}
> $ hadoop distcp -Ddfs.namenode.kerberos.principal.pattern=* 
> -Dmapreduce.job.hdfs-servers.token-renewal.exclude=tech 
> hdfs:///processed/public/opendata/samples/distcp_test/distcp_file.txt 
> hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt
> 19/05/29 14:06:09 INFO tools.DistCp: Input Options: DistCpOptions
> {atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, overwrite=false, append=false, useDiff=false, 
> fromSnapshot=null, toSnapshot=null, skipCRC=false, blocking=true, 
> numListstatusThreads=0, maxMaps=20, mapBandwidth=100, 
> sslConfigurationFile='null', copyStrategy='uniformsize', preserveStatus=[], 
> preserveRawXattrs=false, atomicWorkPath=null, logPath=null, 
> sourceFileListing=null, 
> sourcePaths=[hdfs:/processed/public/opendata/samples/distcp_test/distcp_file.txt],
>  
> targetPath=hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt,
>  targetPathExists=true, filtersFile='null', verboseLog=false}
> 19/05/29 14:06:09 INFO client.AHSProxy: Connecting to Application History 
> server at ha21d53mn.unit.hdp.example.com/10.70.49.2:10200
> 19/05/29 14:06:10 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 
> 5093920 for gss2002 on ha-hdfs:unit
> 19/05/29 14:06:10 INFO security.TokenCache: Got dt for hdfs://unit; Kind: 
> HDFS_DELEGATION_TOKEN, Service: ha-hdfs:unit, Ident: (HDFS_DELEGATION_TOKEN 
> token 5093920 for gss2002)
> 19/05/29 14:06:10 INFO security.TokenCache: Got dt for hdfs://unit; Kind: 
> kms-dt, Service: ha21d53en.unit.hdp.example.com:9292, Ident: (owner=gss2002, 
> renewer=yarn, realUser=, issueDate=1559153170120, maxDate=1559757970120, 
> sequenceNumber=237, masterKeyId=2)
> 19/05/29 14:06:10 INFO tools.SimpleCopyListing: Paths (files+dirs) cnt = 1; 
> dirCnt = 0
> 19/05/29 14:06:10 INFO tools.SimpleCopyListing: Build file listing completed.
> 19/05/29 14:06:10 INFO tools.DistCp: Number of paths in the copy list: 1
> 19/05/29 14:06:10 INFO tools.DistCp: Number of paths in the copy list: 1
> 19/05/29 14:06:10 INFO client.AHSProxy: Connecting to Application History 
> server at ha21d53mn.unit.hdp.example.com/10.70.49.2:10200
> 19/05/29 14:06:10 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 
> 556079 for gss2002 on ha-hdfs:tech
> 19/05/29 14:06:10 ERROR tools.DistCp: Exception encountered 
> java.io.IOException: java.net.NoRouteToHostException: No route to host (Host 
> unreachable)
> at 
> 

[jira] [Updated] (HADOOP-16350) Ability to tell Hadoop not to request KMS Information from Remote NN

2019-06-18 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-16350:

Attachment: HADOOP-16350.00.patch

> Ability to tell Hadoop not to request KMS Information from Remote NN 
> -
>
> Key: HADOOP-16350
> URL: https://issues.apache.org/jira/browse/HADOOP-16350
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, kms
>Affects Versions: 2.8.3, 3.0.0, 2.7.6, 3.1.2
>Reporter: Greg Senia
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16350.00.patch
>
>
> Before HADOOP-14104, remote KMSServer URIs and their associated delegation 
> tokens were not requested from the remote NameNode. Many customers were 
> using this as a security feature to prevent TDE/Encryption Zone data from 
> being distcped to remote clusters. But there was still a use case to allow 
> distcp of data residing in folders that are not encrypted with a 
> KMSProvider/Encrypted Zone.
> After upgrading to a version of Hadoop that contains HADOOP-14104, distcp 
> now fails because we, along with other customers (HDFS-13696), DO NOT allow 
> KMSServer endpoints to be exposed outside our cluster network, as data 
> residing in these TDE zones is very critical and cannot be distcped between 
> clusters.
> I propose adding a new code block guarded by the custom property 
> "hadoop.security.kms.client.allow.remote.kms". It will default to "true", 
> keeping the current behavior of HADOOP-14104, but when set to "false" it 
> will allow this area of code to operate as it did before HADOOP-14104. I can 
> see the value in HADOOP-14104, but the way Hadoop worked before this 
> JIRA/issue should at least have had an option to allow Hadoop/KMS code to 
> operate as it did before, by not requesting remote KMSServer URIs, which 
> would then be used to attempt to get a delegation token even when not 
> operating on encrypted zones.
> The error below occurs when KMS server traffic is not allowed between 
> cluster networks per an enterprise security standard that cannot be changed; 
> the request for an exception was denied, so the only feasible solution is a 
> feature that does not attempt to request tokens.
> {code:java}
> $ hadoop distcp -Ddfs.namenode.kerberos.principal.pattern=* 
> -Dmapreduce.job.hdfs-servers.token-renewal.exclude=tech 
> hdfs:///processed/public/opendata/samples/distcp_test/distcp_file.txt 
> hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt
> 19/05/29 14:06:09 INFO tools.DistCp: Input Options: DistCpOptions
> {atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, overwrite=false, append=false, useDiff=false, 
> fromSnapshot=null, toSnapshot=null, skipCRC=false, blocking=true, 
> numListstatusThreads=0, maxMaps=20, mapBandwidth=100, 
> sslConfigurationFile='null', copyStrategy='uniformsize', preserveStatus=[], 
> preserveRawXattrs=false, atomicWorkPath=null, logPath=null, 
> sourceFileListing=null, 
> sourcePaths=[hdfs:/processed/public/opendata/samples/distcp_test/distcp_file.txt],
>  
> targetPath=hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt,
>  targetPathExists=true, filtersFile='null', verboseLog=false}
> 19/05/29 14:06:09 INFO client.AHSProxy: Connecting to Application History 
> server at ha21d53mn.unit.hdp.example.com/10.70.49.2:10200
> 19/05/29 14:06:10 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 
> 5093920 for gss2002 on ha-hdfs:unit
> 19/05/29 14:06:10 INFO security.TokenCache: Got dt for hdfs://unit; Kind: 
> HDFS_DELEGATION_TOKEN, Service: ha-hdfs:unit, Ident: (HDFS_DELEGATION_TOKEN 
> token 5093920 for gss2002)
> 19/05/29 14:06:10 INFO security.TokenCache: Got dt for hdfs://unit; Kind: 
> kms-dt, Service: ha21d53en.unit.hdp.example.com:9292, Ident: (owner=gss2002, 
> renewer=yarn, realUser=, issueDate=1559153170120, maxDate=1559757970120, 
> sequenceNumber=237, masterKeyId=2)
> 19/05/29 14:06:10 INFO tools.SimpleCopyListing: Paths (files+dirs) cnt = 1; 
> dirCnt = 0
> 19/05/29 14:06:10 INFO tools.SimpleCopyListing: Build file listing completed.
> 19/05/29 14:06:10 INFO tools.DistCp: Number of paths in the copy list: 1
> 19/05/29 14:06:10 INFO tools.DistCp: Number of paths in the copy list: 1
> 19/05/29 14:06:10 INFO client.AHSProxy: Connecting to Application History 
> server at ha21d53mn.unit.hdp.example.com/10.70.49.2:10200
> 19/05/29 14:06:10 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 
> 556079 for gss2002 on ha-hdfs:tech
> 19/05/29 14:06:10 ERROR tools.DistCp: Exception encountered 
> java.io.IOException: java.net.NoRouteToHostException: No route to host (Host 
> unreachable)
> at 
> 
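As a rough sketch of the proposed behavior (illustrative only: the property 
name is the one proposed in the description, while the method, variables, and 
helper below are invented), the guard could look like this:

{code:java}
// Illustrative sketch of the proposed guard, not an actual patch. The
// property name comes from the proposal above; the method, variables, and
// helper are invented for illustration.
private URI[] getRemoteKeyProviderUris(Configuration conf) throws IOException {
  boolean allowRemoteKms = conf.getBoolean(
      "hadoop.security.kms.client.allow.remote.kms", true);
  if (!allowRemoteKms) {
    // Pre-HADOOP-14104 behavior: do not ask the remote NameNode for its
    // KMS provider URIs, so no remote KMS delegation token is requested.
    return null;
  }
  // HADOOP-14104 behavior: query the remote NameNode for its KMS provider
  // URIs so that delegation tokens can be fetched for them.
  return queryRemoteNameNodeForKmsUris(conf); // hypothetical helper
}
{code}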

[GitHub] [hadoop] anuengineer merged pull request #982: HDDS-1702. Optimize Ozone Recon build time

2019-06-18 Thread GitBox
anuengineer merged pull request #982: HDDS-1702. Optimize Ozone Recon build time
URL: https://github.com/apache/hadoop/pull/982
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #986: [HDDS-1690] ContainerController should provide a way to retrieve cont…

2019-06-18 Thread GitBox
hadoop-yetus commented on issue #986: [HDDS-1690] ContainerController should 
provide a way to retrieve cont…
URL: https://github.com/apache/hadoop/pull/986#issuecomment-503309105
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 33 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 537 | trunk passed |
   | +1 | compile | 297 | trunk passed |
   | +1 | checkstyle | 89 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 895 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 183 | trunk passed |
   | 0 | spotbugs | 333 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 518 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 468 | the patch passed |
   | +1 | compile | 303 | the patch passed |
   | +1 | javac | 303 | the patch passed |
   | -0 | checkstyle | 47 | hadoop-hdds: The patch generated 6 new + 0 
unchanged - 0 fixed = 6 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 673 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 180 | the patch passed |
   | +1 | findbugs | 543 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 151 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1008 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 60 | The patch does not generate ASF License warnings. |
   | | | 6239 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.common.impl.TestHddsDispatcher 
|
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-986/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/986 |
   | JIRA Issue | HDDS-1690 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 8657ba3c158f 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 81ec909 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-986/1/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-986/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-986/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-986/1/testReport/ |
   | Max. process+thread count | 5296 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service U: 
hadoop-hdds/container-service |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-986/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sahilTakiar commented on issue #983: HADOOP-16379: S3AInputStream#unbuffer should merge input stream stats into fs-wide stats

2019-06-18 Thread GitBox
sahilTakiar commented on issue #983: HADOOP-16379: S3AInputStream#unbuffer 
should merge input stream stats into fs-wide stats
URL: https://github.com/apache/hadoop/pull/983#issuecomment-503303248
 
 
   @steveloughran addressed your comments. Hadoop QA looks happy as well.
   
   Re-ran the S3A tests.
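   For readers following along, a minimal self-contained sketch of the idea in 
   the PR title (all names below are invented; this is not the actual S3A 
   code): on unbuffer(), the per-stream counters are folded into the 
   filesystem-wide statistics so they are not lost while the stream holds no 
   buffers.
   
   {code:java}
   // Sketch only: fold per-stream counters into shared fs-wide stats on
   // unbuffer(). All names are invented for illustration.
   import java.util.concurrent.atomic.AtomicLong;
   
   class FsWideStats {
     final AtomicLong bytesRead = new AtomicLong(); // shared by all streams
   }
   
   class CountingStream {
     private final FsWideStats fsStats;
     private long streamBytesRead; // per-stream counter
   
     CountingStream(FsWideStats fsStats) {
       this.fsStats = fsStats;
     }
   
     void recordRead(int n) {
       streamBytesRead += n;
     }
   
     /** Release buffers and merge this stream's counters into fs stats. */
     void unbuffer() {
       fsStats.bytesRead.addAndGet(streamBytesRead);
       streamBytesRead = 0; // reset so a later merge doesn't double-count
     }
   }
   {code}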


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #986: [HDDS-1690] ContainerController should provide a way to retrieve cont…

2019-06-18 Thread GitBox
bharatviswa504 commented on a change in pull request #986: [HDDS-1690] 
ContainerController should provide a way to retrieve cont…
URL: https://github.com/apache/hadoop/pull/986#discussion_r295013673
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/ContainerController.java
 ##
 @@ -140,4 +141,8 @@ private Handler getHandler(final Container container) {
   public Iterator getContainers() {
 return containerSet.getContainerIterator();
   }
+
+  public Iterator getContainersForVolume(String volumeUuid) {
 
 Review comment:
   Add Javadoc for this method.
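   A possible shape for that Javadoc (a sketch filling in the reviewer's 
   request; the generic type and the delegation to containerSet mirror the 
   existing getContainers() shown above and are assumptions, not the 
   committed patch):
   
   {code:java}
   /**
    * Returns an iterator over the containers that reside on the given volume.
    *
    * @param volumeUuid UUID of the volume whose containers should be iterated
    * @return iterator of the containers belonging to that volume
    */
   public Iterator<Container> getContainersForVolume(String volumeUuid) {
     return containerSet.getContainerIteratorForVolume(volumeUuid);
   }
   {code}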


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #986: [HDDS-1690] ContainerController should provide a way to retrieve cont…

2019-06-18 Thread GitBox
bharatviswa504 commented on a change in pull request #986: [HDDS-1690] 
ContainerController should provide a way to retrieve cont…
URL: https://github.com/apache/hadoop/pull/986#discussion_r295012141
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerSet.java
 ##
 @@ -128,6 +128,14 @@ public int containerCount() {
 return containerMap.values().iterator();
   }
 
+  public Iterator getContainerIteratorForVolume(String volumeUuid) {
 
 Review comment:
   Minor: Add javadoc for this method.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #951: HADOOP-15183. S3Guard store becomes inconsistent after partial failure of rename

2019-06-18 Thread GitBox
hadoop-yetus commented on issue #951: HADOOP-15183. S3Guard store becomes 
inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/951#issuecomment-503284327
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 513 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 3 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 34 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 68 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1021 | trunk passed |
   | +1 | compile | 1104 | trunk passed |
   | +1 | checkstyle | 141 | trunk passed |
   | +1 | mvnsite | 117 | trunk passed |
   | +1 | shadedclient | 970 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 91 | trunk passed |
   | 0 | spotbugs | 63 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 185 | trunk passed |
   | -0 | patch | 95 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for patch |
   | +1 | mvninstall | 78 | the patch passed |
   | +1 | compile | 1065 | the patch passed |
   | +1 | javac | 1065 | the patch passed |
   | -0 | checkstyle | 144 | root: The patch generated 18 new + 107 unchanged - 
4 fixed = 125 total (was 111) |
   | +1 | mvnsite | 108 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 657 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 27 | hadoop-tools_hadoop-aws generated 4 new + 1 unchanged 
- 0 fixed = 5 total (was 1) |
   | +1 | findbugs | 208 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 543 | hadoop-common in the patch passed. |
   | +1 | unit | 285 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 48 | The patch does not generate ASF License warnings. |
   | | | 7472 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-951/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/951 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 9e5c25f7ae9b 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / b14f056 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-951/9/artifact/out/diff-checkstyle-root.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-951/9/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-951/9/testReport/ |
   | Max. process+thread count | 1393 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-951/9/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HADOOP-16363) S3Guard DDB store prune() doesn't translate AWS exceptions to IOEs

2019-06-18 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16363:

Comment: was deleted

(was: Fixed in HADOOP-15183 with a new exception {{TableDeleteTimeoutException 
extends PathIOException}}; IllegalArgumentException on teardown is mapped to 
this. 

Just realised I need to catch this in S3GuardTool.Destroy, as it's not a 
failure; I'm going to downgrade it to a warning. Scripts will still succeed; 
they may only fail if there is a new init immediately after, and that failure 
will be caught and reported.)

> S3Guard DDB store prune() doesn't translate AWS exceptions to IOEs
> --
>
> Key: HADOOP-16363
> URL: https://issues.apache.org/jira/browse/HADOOP-16363
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>
> Fixing in HADOOP-15183: if you call prune() against a nonexistent DDB table, 
> the exception isn't being translated into an IOE.
> This is interesting, as the codepath is going through retry(); it's just 
> that the IO is taking place inside the iterator, and we don't have checks 
> there.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16364) S3Guard table destroy to map IllegalArgumentExceptions to IOEs

2019-06-18 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866982#comment-16866982
 ] 

Steve Loughran commented on HADOOP-16364:
-

Fixed in HADOOP-15183 with a new exception {{TableDeleteTimeoutException 
extends PathIOException}}; IllegalArgumentException on teardown is mapped to 
this. 

Just realised I need to catch this in S3GuardTool.Destroy, as it's not a 
failure; I'm going to downgrade it to a warning. Scripts will still succeed; 
they may only fail if there is a new init immediately after, and that failure 
will be caught and reported.

> S3Guard table destroy to map IllegalArgumentExceptions to IOEs
> --
>
> Key: HADOOP-16364
> URL: https://issues.apache.org/jira/browse/HADOOP-16364
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>
> in the DDB metastore.destroy, waitForTableDeletion() is called to await the 
> destruction. But sometimes it takes longer than the allocated time, after 
> which an IllegalArgumentException is raised.
> Catch this and convert it to a specific IOE which can be swallowed as 
> desired.
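A minimal sketch of that translation (illustrative only: the wait helper, 
variable names, and constructor shape are assumptions; the exception names 
come from the comments above):

{code:java}
// Illustrative sketch, not the committed patch: translate the
// IllegalArgumentException raised when the table-delete wait times out
// into a specific IOException that callers can swallow or log.
try {
  waitForTableDeletion(table); // hypothetical wait helper
} catch (IllegalArgumentException e) {
  // TableDeleteTimeoutException extends PathIOException, so callers such
  // as S3GuardTool.Destroy can catch it, log a warning, and still exit 0.
  throw new TableDeleteTimeoutException(tableName,
      "Timeout waiting for table " + tableName + " to be deleted", e);
}
{code}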



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hgadre opened a new pull request #986: [HDDS-1690] ContainerController should provide a way to retrieve cont…

2019-06-18 Thread GitBox
hgadre opened a new pull request #986: [HDDS-1690] ContainerController should 
provide a way to retrieve cont…
URL: https://github.com/apache/hadoop/pull/986
 
 
   …ainers per volume
   
   Added an API in ContainerSet (and ContainerController) which exposes
   an iterator of containers for a specified volume identifier.
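   A hypothetical caller of the new API (variable names invented), for 
   example a per-volume scanner:
   
   {code:java}
   // Hypothetical usage of the new API; variable names are invented.
   Iterator<Container> containers =
       containerController.getContainersForVolume(volumeUuid);
   while (containers.hasNext()) {
     Container container = containers.next();
     // e.g. scan the container, or mark it unhealthy if the volume failed
     process(container); // hypothetical per-container work
   }
   {code}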


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #852: HDDS-1454. GC other system pause events can trigger pipeline destroy for all the nodes in the cluster. Contributed by Supratim Deka

2019-06-18 Thread GitBox
hadoop-yetus commented on issue #852: HDDS-1454. GC other system pause events 
can trigger pipeline destroy for all the nodes in the cluster. Contributed by 
Supratim Deka
URL: https://github.com/apache/hadoop/pull/852#issuecomment-503267973
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 49 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 565 | trunk passed |
   | +1 | compile | 276 | trunk passed |
   | +1 | checkstyle | 91 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 888 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 175 | trunk passed |
   | 0 | spotbugs | 333 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 525 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 477 | the patch passed |
   | +1 | compile | 310 | the patch passed |
   | +1 | javac | 310 | the patch passed |
   | +1 | checkstyle | 96 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 668 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 179 | the patch passed |
   | +1 | findbugs | 545 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 166 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1166 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 60 | The patch does not generate ASF License warnings. |
   | | | 6381 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.common.impl.TestHddsDispatcher 
|
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-852/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/852 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 5f0559c1093a 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3ab77d9 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-852/3/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-852/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-852/3/testReport/ |
   | Max. process+thread count | 5401 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/server-scm U: hadoop-hdds/server-scm |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-852/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 edited a comment on issue #954: HDDS-1670. Add limit support to /api/containers and /api/containers/{id} endpoints

2019-06-18 Thread GitBox
bharatviswa504 edited a comment on issue #954: HDDS-1670. Add limit support to 
/api/containers and /api/containers/{id} endpoints
URL: https://github.com/apache/hadoop/pull/954#issuecomment-503197062
 
 
   Test failures look unrelated to this patch.
   I will commit this to the trunk. Thank you @vivekratnavel for the 
contribution.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16379) S3AInputStream#unbuffer should merge input stream stats into fs-wide stats

2019-06-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866932#comment-16866932
 ] 

Hadoop QA commented on HADOOP-16379:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 46s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  0m 
57s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 22s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
43s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.5 Server=18.09.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-983/2/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/983 |
| JIRA Issue | HADOOP-16379 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux 4f12c5432e99 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 3c1a1ce |
| Default Java | 1.8.0_212 |
|  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-983/2/testReport/ |
| Max. process+thread count | 320 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-983/2/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| 

[GitHub] [hadoop] hadoop-yetus commented on issue #983: HADOOP-16379: S3AInputStream#unbuffer should merge input stream stats into fs-wide stats

2019-06-18 Thread GitBox
hadoop-yetus commented on issue #983: HADOOP-16379: S3AInputStream#unbuffer 
should merge input stream stats into fs-wide stats
URL: https://github.com/apache/hadoop/pull/983#issuecomment-503262336
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 71 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1116 | trunk passed |
   | +1 | compile | 34 | trunk passed |
   | +1 | checkstyle | 22 | trunk passed |
   | +1 | mvnsite | 37 | trunk passed |
   | +1 | shadedclient | 766 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 24 | trunk passed |
   | 0 | spotbugs | 57 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 55 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 30 | the patch passed |
   | +1 | compile | 27 | the patch passed |
   | +1 | javac | 27 | the patch passed |
   | +1 | checkstyle | 17 | the patch passed |
   | +1 | mvnsite | 32 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 802 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 22 | the patch passed |
   | +1 | findbugs | 70 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 283 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 28 | The patch does not generate ASF License warnings. |
   | | | 3530 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.5 Server=18.09.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-983/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/983 |
   | JIRA Issue | HADOOP-16379 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 4f12c5432e99 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3c1a1ce |
   | Default Java | 1.8.0_212 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-983/2/testReport/ |
   | Max. process+thread count | 320 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-983/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on issue #980: Update RocksDB version to 6.0.1

2019-06-18 Thread GitBox
bharatviswa504 commented on issue #980: Update RocksDB version to 6.0.1
URL: https://github.com/apache/hadoop/pull/980#issuecomment-503250956
 
 
   I found the Jira for this.
   Thank you @anuengineer for the review.
   I have committed this to the trunk.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 merged pull request #980: Update RocksDB version to 6.0.1

2019-06-18 Thread GitBox
bharatviswa504 merged pull request #980: Update RocksDB version to 6.0.1
URL: https://github.com/apache/hadoop/pull/980
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16340) ABFS driver continues to retry on IOException responses from REST operations

2019-06-18 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866895#comment-16866895
 ] 

Steve Loughran commented on HADOOP-16340:
-

[~rlevas] - you can now have hadoop-common JIRAs assigned to you

> ABFS driver continues to retry on IOException responses from REST operations
> 
>
> Key: HADOOP-16340
> URL: https://issues.apache.org/jira/browse/HADOOP-16340
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Robert Levas
>Assignee: Robert Levas
>Priority: Major
>
> The ABFS driver continues to retry (until the retry count is exhausted) upon 
> IOException responses from REST operations.
> In the exception handler for IOExceptions at 
> [https://github.com/apache/hadoop/blob/65f60e56b082faf92e1cd3daee2569d8fc669c67/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsRestOperation.java#L174-L197],
>  there is no way to exit the retry loop by re-throwing an exception 
> unless one of the following conditions has been met:
>  * The retry limit was hit
>  * An HttpException was encountered
> From an 
> {{org.apache.hadoop.fs.azurebfs.extensions.CustomTokenProviderAdaptee}} or 
> {{org.apache.hadoop.fs.azurebfs.extensions.CustomDelegationTokenManager}} 
> implementation, there is no way to create an 
> {{org.apache.hadoop.fs.azurebfs.oauth2.AzureADAuthenticator.HttpException}} 
> since the constructor is package private. 
> Either the exception handler needs to generically handle exceptions such as 
> {{java.nio.file.AccessDeniedException}} and 
> {{java.io.FileNotFoundException}}, or access to 
> {{org.apache.hadoop.fs.azurebfs.oauth2.AzureADAuthenticator.HttpException}} 
> needs to be opened up so that custom implementations can use it. 
>  
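A minimal sketch of the first option (illustrative only: the loop shape, 
executeHttpOperation(), and maxIoRetries are invented; only the exception 
types come from the description above):

{code:java}
// Sketch: let specific IOExceptions escape the retry loop instead of
// retrying until the count is exhausted. Helper and limit are invented.
for (int retryCount = 0;; retryCount++) {
  try {
    executeHttpOperation(); // hypothetical single REST attempt
    return;
  } catch (java.nio.file.AccessDeniedException
      | java.io.FileNotFoundException e) {
    throw e; // non-retryable: fail fast out of the loop
  } catch (java.io.IOException e) {
    if (retryCount >= maxIoRetries) {
      throw e; // retry budget exhausted
    }
    // otherwise fall through and retry
  }
}
{code}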



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #984: HDDS-1674 Make ScmBlockLocationProtocol message type based

2019-06-18 Thread GitBox
hadoop-yetus commented on issue #984: HDDS-1674 Make ScmBlockLocationProtocol 
message type based
URL: https://github.com/apache/hadoop/pull/984#issuecomment-503230601
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 62 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 659 | trunk passed |
   | +1 | compile | 361 | trunk passed |
   | +1 | checkstyle | 93 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1052 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 214 | trunk passed |
   | 0 | spotbugs | 418 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 660 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 654 | the patch passed |
   | +1 | compile | 396 | the patch passed |
   | +1 | cc | 396 | the patch passed |
   | +1 | javac | 396 | the patch passed |
   | -0 | checkstyle | 51 | hadoop-hdds: The patch generated 20 new + 0 
unchanged - 0 fixed = 20 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 758 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 171 | the patch passed |
   | +1 | findbugs | 576 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 193 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1812 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 62 | The patch does not generate ASF License warnings. |
   | | | 8027 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.common.impl.TestHddsDispatcher 
|
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerCommandHandler
 |
   |   | hadoop.ozone.web.client.TestKeysRatis |
   |   | hadoop.ozone.client.rpc.TestHybridPipelineOnDatanode |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.web.client.TestVolume |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-984/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/984 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 4e5a8f056dc9 4.4.0-144-generic #170~14.04.1-Ubuntu SMP Mon 
Mar 18 15:02:05 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 335c1c9 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-984/2/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-984/2/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-984/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-984/2/testReport/ |
   | Max. process+thread count | 2491 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common U: hadoop-hdds/common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-984/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16377) Moving logging APIs over to slf4j

2019-06-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866852#comment-16866852
 ] 

Hadoop QA commented on HADOOP-16377:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 22 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 23m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
54s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
28s{color} | {color:red} hadoop-submarine-tony-runtime in trunk failed. {color} 
|
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
24m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
23s{color} | {color:red} hadoop-submarine-tony-runtime in trunk failed. {color} 
|
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
29s{color} | {color:red} hadoop-submarine-tony-runtime in trunk failed. {color} 
|
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
12s{color} | {color:red} hadoop-submarine-tony-runtime in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
33s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 15m 33s{color} 
| {color:red} root generated 5 new + 1451 unchanged - 15 fixed = 1456 total 
(was 1466) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 36s{color} | {color:orange} root: The patch generated 12 new + 1081 
unchanged - 30 fixed = 1093 total (was ) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
24s{color} | {color:red} hadoop-submarine-tony-runtime in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 34s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
56s{color} | {color:red} hadoop-common-project/hadoop-common generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
28s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
22s{color} | {color:red} hadoop-submarine-tony-runtime in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
20s{color} | {color:red} hadoop-submarine-tony-runtime in the patch failed. 
{color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
50s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}108m  8s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 21m 
33s{color} | 

[GitHub] [hadoop] supratimdeka commented on a change in pull request #852: HDDS-1454. GC other system pause events can trigger pipeline destroy for all the nodes in the cluster. Contributed by Suprati

2019-06-18 Thread GitBox
supratimdeka commented on a change in pull request #852: HDDS-1454. GC other 
system pause events can trigger pipeline destroy for all the nodes in the 
cluster. Contributed by Supratim Deka
URL: https://github.com/apache/hadoop/pull/852#discussion_r294930618
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestSCMNodeManager.java
 ##
 @@ -339,6 +341,96 @@ public void testScmDetectStaleAndDeadNode()
 }
   }
 
+  /**
+   * Simulate a JVM Pause by pausing the health check process
+   * Ensure that none of the nodes with heartbeats become Dead or Stale.
+   * @throws IOException
+   * @throws InterruptedException
+   * @throws AuthenticationException
+   */
+  @Test
+  public void testScmHandleJvmPause()
+  throws IOException, InterruptedException, AuthenticationException {
+final int healthCheckInterval = 200; // milliseconds
+final int heartbeatInterval = 1; // seconds
+final int staleNodeInterval = 3; // seconds
+final int deadNodeInterval = 6; // seconds
+ScheduledFuture schedFuture;
 
 Review comment:
   Do you recommend changing the name? Sorry, I'm not sure I understood the 
typo.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] supratimdeka commented on a change in pull request #852: HDDS-1454. GC other system pause events can trigger pipeline destroy for all the nodes in the cluster. Contributed by Suprati

2019-06-18 Thread GitBox
supratimdeka commented on a change in pull request #852: HDDS-1454. GC other 
system pause events can trigger pipeline destroy for all the nodes in the 
cluster. Contributed by Supratim Deka
URL: https://github.com/apache/hadoop/pull/852#discussion_r294930277
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeStateManager.java
 ##
 @@ -464,6 +487,44 @@ public void setContainers(UUID uuid, Set 
containerIds)
   @Override
   public void run() {
 
+if (shouldSkipCheck()) {
+  skippedHealthChecks++;
+  LOG.info("Detected long delay in scheduling HB processing thread. "
+  + "Skipping heartbeat checks for one iteration.");
+} else {
+  checkNodesHealth();
+}
+
+// we purposefully make this non-deterministic. Instead of using a
+// scheduleAtFixedFrequency  we will just go to sleep
+// and wake up at the next rendezvous point, which is currentTime +
+// heartbeatCheckerIntervalMs. This leads to the issue that we are now
+// heart beating not at a fixed cadence, but clock tick + time taken to
+// work.
+//
+// This time taken to work can skew the heartbeat processor thread.
+// The reason why we don't care is because of the following reasons.
+//
+// 1. checkerInterval is general many magnitudes faster than datanode HB
+// frequency.
+//
+// 2. if we have too much nodes, the SCM would be doing only HB
+// processing, this could lead to SCM's CPU starvation. With this
+// approach we always guarantee that  HB thread sleeps for a little while.
+//
+// 3. It is possible that we will never finish processing the HB's in the
+// thread. But that means we have a mis-configured system. We will warn
+// the users by logging that information.
+//
+// 4. And the most important reason, heartbeats are not blocked even if
+// this thread does not run, they will go into the processing queue.
+scheduleNextHealthCheck();
+
+return;
 
 Review comment:
   done


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] supratimdeka commented on a change in pull request #852: HDDS-1454. GC other system pause events can trigger pipeline destroy for all the nodes in the cluster. Contributed by Suprati

2019-06-18 Thread GitBox
supratimdeka commented on a change in pull request #852: HDDS-1454. GC other 
system pause events can trigger pipeline destroy for all the nodes in the 
cluster. Contributed by Supratim Deka
URL: https://github.com/apache/hadoop/pull/852#discussion_r294930181
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeStateManager.java
 ##
 @@ -558,41 +619,40 @@ public void run() {
   heartbeatCheckerIntervalMs);
 }
 
-// we purposefully make this non-deterministic. Instead of using a
-// scheduleAtFixedFrequency  we will just go to sleep
-// and wake up at the next rendezvous point, which is currentTime +
-// heartbeatCheckerIntervalMs. This leads to the issue that we are now
-// heart beating not at a fixed cadence, but clock tick + time taken to
-// work.
-//
-// This time taken to work can skew the heartbeat processor thread.
-// The reason why we don't care is because of the following reasons.
-//
-// 1. checkerInterval is general many magnitudes faster than datanode HB
-// frequency.
-//
-// 2. if we have too much nodes, the SCM would be doing only HB
-// processing, this could lead to SCM's CPU starvation. With this
-// approach we always guarantee that  HB thread sleeps for a little while.
-//
-// 3. It is possible that we will never finish processing the HB's in the
-// thread. But that means we have a mis-configured system. We will warn
-// the users by logging that information.
-//
-// 4. And the most important reason, heartbeats are not blocked even if
-// this thread does not run, they will go into the processing queue.
+  }
+
+  private void scheduleNextHealthCheck() {
 
 if (!Thread.currentThread().isInterrupted() &&
 !executorService.isShutdown()) {
   //BUGBUG: The return future needs to checked here to make sure the
   // exceptions are handled correctly.
-  executorService.schedule(this, heartbeatCheckerIntervalMs,
-  TimeUnit.MILLISECONDS);
+  healthCheckFuture = executorService.schedule(this,
+  heartbeatCheckerIntervalMs, TimeUnit.MILLISECONDS);
 } else {
-  LOG.info("Current Thread is interrupted, shutting down HB processing " +
+  LOG.warn("Current Thread is interrupted, shutting down HB processing " +
   "thread for Node Manager.");
 }
 
+lastHealthCheck = Time.monotonicNow();
+  }
+
+  /**
+   * if the time since last check exceeds the stale|dead node interval, skip.
+   * such long delays might be caused by a JVM pause. SCM cannot make reliable
+   * conclusions about datanode health in such situations.
+   * @return : true indicates skip HB checks
+   */
+  private boolean shouldSkipCheck() {
+
+long currentTime = Time.monotonicNow();
+long minInterval = Math.min(staleNodeIntervalMs, deadNodeIntervalMs);
+
+if ((currentTime - lastHealthCheck) >= minInterval) {
+  return true;
+}
+
+return false;
 
 Review comment:
   done


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #985: HADOOP-16380. ITestS3AContractRootDir failing on trunk

2019-06-18 Thread GitBox
hadoop-yetus commented on issue #985: HADOOP-16380. ITestS3AContractRootDir 
failing on trunk
URL: https://github.com/apache/hadoop/pull/985#issuecomment-503221021
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 35 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1094 | trunk passed |
   | +1 | compile | 1144 | trunk passed |
   | +1 | checkstyle | 138 | trunk passed |
   | +1 | mvnsite | 118 | trunk passed |
   | +1 | shadedclient | 950 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 86 | trunk passed |
   | 0 | spotbugs | 62 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 178 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for patch |
   | +1 | mvninstall | 77 | the patch passed |
   | +1 | compile | 1094 | the patch passed |
   | +1 | javac | 1094 | the patch passed |
   | +1 | checkstyle | 142 | the patch passed |
   | +1 | mvnsite | 113 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 668 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 85 | the patch passed |
   | +1 | findbugs | 203 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 549 | hadoop-common in the patch failed. |
   | -1 | unit | 3656 | hadoop-aws in the patch failed. |
   | +1 | asflicense | 46 | The patch does not generate ASF License warnings. |
   | | | 10425 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.util.TestBasicDiskValidator |
   |   | hadoop.util.TestReadWriteDiskValidator |
   |   | hadoop.util.TestDiskChecker |
   |   | hadoop.fs.s3a.commit.staging.TestStagingCommitter |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-985/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/985 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux a9542cde05d1 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 335c1c9 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-985/1/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-985/1/artifact/out/patch-unit-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-985/1/testReport/ |
   | Max. process+thread count | 1430 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-985/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] goiri commented on a change in pull request #977: (HDFS-14541)when evictableMmapped or evictable size is zero, do not throw NoSuchE…

2019-06-18 Thread GitBox
goiri commented on a change in pull request #977: (HDFS-14541)when 
evictableMmapped or evictable size is zero, do not throw NoSuchE… 
URL: https://github.com/apache/hadoop/pull/977#discussion_r294922980
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/shortcircuit/ShortCircuitCache.java
 ##
 @@ -533,23 +523,15 @@ private void trimEvictionMaps() {
 long now = Time.monotonicNow();
 demoteOldEvictableMmaped(now);
 
-while (true) {
-  long evictableSize = evictable.size();
-  long evictableMmappedSize = evictableMmapped.size();
-  if (evictableSize + evictableMmappedSize <= maxTotalSize) {
-return;
-  }
+while (evictable.size() + evictableMmapped.size() > maxTotalSize) {
   ShortCircuitReplica replica;
-  try {
-if (evictableSize == 0) {
-  replica = (ShortCircuitReplica)evictableMmapped.get(evictableMmapped
-  .firstKey());
-} else {
-  replica = (ShortCircuitReplica)evictable.get(evictable.firstKey());
-}
-  } catch (NoSuchElementException e) {
-break;
+  if (evictable.size() == 0) {
 
 Review comment:
   Use isEmpty() here for clarity?
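   For illustration, a self-contained sketch of the loop with that change
   applied (TreeMaps stand in for the real replica caches; this is not the
   patch itself):
   
   ```java
   import java.util.TreeMap;
   
   public class EvictionSketch {
     public static void main(String[] args) {
       TreeMap<Long, String> evictable = new TreeMap<>();
       TreeMap<Long, String> evictableMmapped = new TreeMap<>();
       evictableMmapped.put(1L, "replica-1");
       int maxTotalSize = 0;
   
       while (evictable.size() + evictableMmapped.size() > maxTotalSize) {
         String replica;
         if (evictable.isEmpty()) {  // reads better than size() == 0
           replica = evictableMmapped.remove(evictableMmapped.firstKey());
         } else {
           replica = evictable.remove(evictable.firstKey());
         }
         System.out.println("evicting " + replica);
       }
     }
   }
   ```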


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 edited a comment on issue #980: Update RocksDB version to 6.0.1

2019-06-18 Thread GitBox
bharatviswa504 edited a comment on issue #980: Update RocksDB version to 6.0.1
URL: https://github.com/apache/hadoop/pull/980#issuecomment-503197955
 
 
   Test failures look unrelated to this patch.
   Thank You @avijayanhwx for the contribution. Is there a Jira associated with 
this, so that I can add it during commit?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 edited a comment on issue #980: Update RocksDB version to 6.0.1

2019-06-18 Thread GitBox
bharatviswa504 edited a comment on issue #980: Update RocksDB version to 6.0.1
URL: https://github.com/apache/hadoop/pull/980#issuecomment-503197955
 
 
   Test failures look unrelated to this patch.
   I will commit this to the trunk.
   Thank You @avijayanhwx for the contribution. Is there a Jira associated with 
this?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on issue #980: Update RocksDB version to 6.0.1

2019-06-18 Thread GitBox
bharatviswa504 commented on issue #980: Update RocksDB version to 6.0.1
URL: https://github.com/apache/hadoop/pull/980#issuecomment-503197955
 
 
   Test failures look unrelated to this patch.
   I will commit this to the trunk.
   Thank You @avijayanhwx for the contribution.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 edited a comment on issue #954: HDDS-1670. Add limit support to /api/containers and /api/containers/{id} endpoints

2019-06-18 Thread GitBox
bharatviswa504 edited a comment on issue #954: HDDS-1670. Add limit support to 
/api/containers and /api/containers/{id} endpoints
URL: https://github.com/apache/hadoop/pull/954#issuecomment-503197062
 
 
   Test failures look unrelated to this patch.
   I will commit this to the trunk. Thank You @vivekratnavel for the 
contribution.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 merged pull request #954: HDDS-1670. Add limit support to /api/containers and /api/containers/{id} endpoints

2019-06-18 Thread GitBox
bharatviswa504 merged pull request #954: HDDS-1670. Add limit support to 
/api/containers and /api/containers/{id} endpoints
URL: https://github.com/apache/hadoop/pull/954
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on issue #954: HDDS-1670. Add limit support to /api/containers and /api/containers/{id} endpoints

2019-06-18 Thread GitBox
bharatviswa504 commented on issue #954: HDDS-1670. Add limit support to 
/api/containers and /api/containers/{id} endpoints
URL: https://github.com/apache/hadoop/pull/954#issuecomment-503197062
 
 
   Test failures look unrelated to this patch.
   I will commit this to the trunk.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #983: HADOOP-16379: S3AInputStream#unbuffer should merge input stream stats into fs-wide stats

2019-06-18 Thread GitBox
steveloughran commented on issue #983: HADOOP-16379: S3AInputStream#unbuffer 
should merge input stream stats into fs-wide stats
URL: https://github.com/apache/hadoop/pull/983#issuecomment-503175068
 
 
   LGTM, with some changes around the testing. Checkstyle has a complaint which 
needs to be addressed.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #983: HADOOP-16379: S3AInputStream#unbuffer should merge input stream stats into fs-wide stats

2019-06-18 Thread GitBox
steveloughran commented on a change in pull request #983: HADOOP-16379: 
S3AInputStream#unbuffer should merge input stream stats into fs-wide stats
URL: https://github.com/apache/hadoop/pull/983#discussion_r294870610
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AUnbuffer.java
 ##
 @@ -60,6 +70,66 @@ public void testUnbuffer() throws IOException {
 }
   }
 
+  /**
+   * Test that calling {@link S3AInputStream#unbuffer()} merges a stream's
+   * {@link org.apache.hadoop.fs.s3a.S3AInstrumentation.InputStreamStatistics}
+   * into the {@link S3AFileSystem}'s {@link S3AInstrumentation} instance.
+   */
+  @Test
+  public void testUnbufferStreamStatistics() throws IOException {
+describe("testUnbufferStreamStatistics");
+
+// Create a new S3AFileSystem instance so we can have an independent
+// instance of S3AInstrumentation
+Configuration conf = createConfiguration();
+S3AFileSystem fs = new S3AFileSystem();
+fs.initialize(getFileSystem().getUri(), conf);
+
+// Open file, read half the data, and then call unbuffer
+FSDataInputStream inputStream = null;
+try {
+  inputStream = fs.open(dest);
+
+  // Sanity check to make sure the stream statistics are 0
+  assertEquals(0,
+  fs.getInstrumentation().getCounterValue(STREAM_SEEK_BYTES_READ));
+
+  assertEquals(8, inputStream.read(new byte[8]));
+  inputStream.unbuffer();
+
+  // Validate that calling unbuffer updates the input stream statistics
+  assertEquals(8,
+  fs.getInstrumentation().getCounterValue(STREAM_SEEK_BYTES_READ));
+
+  // Validate that calling unbuffer twice in a row updates the statistics
+  // correctly
+  assertEquals(4, inputStream.read(new byte[4]));
+  inputStream.unbuffer();
+  assertEquals(12,
+  fs.getInstrumentation().getCounterValue(STREAM_SEEK_BYTES_READ));
+} finally {
+  if (inputStream != null) {
+inputStream.close();
+  }
+}
+
+// Validate that closing the file does not further change the statistics
+assertEquals(12,
+  fs.getInstrumentation().getCounterValue(STREAM_SEEK_BYTES_READ));
+
+// Validate that the input stream stats are correct when the file is closed
+assertNotNull(inputStream);
+assertEquals(12,
+((S3AInputStream) inputStream.getWrappedStream())
+.getS3AStreamStatistics().bytesRead);
+  }
+
+  @Override
+  public void teardown() throws Exception {
+getFileSystem().delete(dest, true);
 
 Review comment:
   The superclass deleteTestDirInTeardown() should do this. If it doesn't, guard 
the call with checks for the filesystem and dest path not being null, so it 
won't hide the stack trace of a failure in setup.
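   A sketch of that guarded teardown (the dest field and super.teardown()
   call are assumptions from the test base class, not the actual patch):
   
   ```java
   @Override
   public void teardown() throws Exception {
     // Only attempt cleanup if setup got far enough to create these;
     // a NullPointerException here would mask the real setup failure.
     if (getFileSystem() != null && dest != null) {
       getFileSystem().delete(dest, true);
     }
     super.teardown();
   }
   ```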


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #983: HADOOP-16379: S3AInputStream#unbuffer should merge input stream stats into fs-wide stats

2019-06-18 Thread GitBox
steveloughran commented on a change in pull request #983: HADOOP-16379: 
S3AInputStream#unbuffer should merge input stream stats into fs-wide stats
URL: https://github.com/apache/hadoop/pull/983#discussion_r294867813
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AUnbuffer.java
 ##
 @@ -60,6 +70,66 @@ public void testUnbuffer() throws IOException {
 }
   }
 
+  /**
+   * Test that calling {@link S3AInputStream#unbuffer()} merges a stream's
+   * {@link org.apache.hadoop.fs.s3a.S3AInstrumentation.InputStreamStatistics}
+   * into the {@link S3AFileSystem}'s {@link S3AInstrumentation} instance.
+   */
+  @Test
+  public void testUnbufferStreamStatistics() throws IOException {
+describe("testUnbufferStreamStatistics");
+
+// Create a new S3AFileSystem instance so we can have an independent
+// instance of S3AInstrumentation
+Configuration conf = createConfiguration();
+S3AFileSystem fs = new S3AFileSystem();
+fs.initialize(getFileSystem().getUri(), conf);
 
 Review comment:
   needs to be deleted in a finally clause
   
   Also, I don't think it is needed at all. MetricDiff is designed to do 
asserts over changes in statistics.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #983: HADOOP-16379: S3AInputStream#unbuffer should merge input stream stats into fs-wide stats

2019-06-18 Thread GitBox
steveloughran commented on a change in pull request #983: HADOOP-16379: 
S3AInputStream#unbuffer should merge input stream stats into fs-wide stats
URL: https://github.com/apache/hadoop/pull/983#discussion_r294867813
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AUnbuffer.java
 ##
 @@ -60,6 +70,66 @@ public void testUnbuffer() throws IOException {
 }
   }
 
+  /**
+   * Test that calling {@link S3AInputStream#unbuffer()} merges a stream's
+   * {@link org.apache.hadoop.fs.s3a.S3AInstrumentation.InputStreamStatistics}
+   * into the {@link S3AFileSystem}'s {@link S3AInstrumentation} instance.
+   */
+  @Test
+  public void testUnbufferStreamStatistics() throws IOException {
+describe("testUnbufferStreamStatistics");
+
+// Create a new S3AFileSystem instance so we can have an independent
+// instance of S3AInstrumentation
+Configuration conf = createConfiguration();
+S3AFileSystem fs = new S3AFileSystem();
+fs.initialize(getFileSystem().getUri(), conf);
 
 Review comment:
   needs to be deleted in a finally clause


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #983: HADOOP-16379: S3AInputStream#unbuffer should merge input stream stats into fs-wide stats

2019-06-18 Thread GitBox
steveloughran commented on a change in pull request #983: HADOOP-16379: 
S3AInputStream#unbuffer should merge input stream stats into fs-wide stats
URL: https://github.com/apache/hadoop/pull/983#discussion_r294867505
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AUnbuffer.java
 ##
 @@ -38,15 +41,22 @@
  */
 public class ITestS3AUnbuffer extends AbstractS3ATestBase {
 
-  @Test
-  public void testUnbuffer() throws IOException {
-// Setup test file
-Path dest = path("testUnbuffer");
-describe("testUnbuffer");
+  private Path dest;
+
+  @Override
+  public void setup() throws Exception {
+super.setup();
+dest = path("ITestS3AUnbuffer");
+describe("ITestS3AUnbuffer");
 try (FSDataOutputStream outputStream = getFileSystem().create(dest, true)) 
{
 
 Review comment:
   ContractTestUtils.writeDataset
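   For illustration, a sketch of that suggestion; the 1 KB size and fill
   pattern are assumptions, not values from the patch:
   
   ```java
   // dataset() builds a deterministic test buffer; writeDataset() creates
   // the file, writes the data, and closes the stream in one call.
   byte[] data = ContractTestUtils.dataset(1024, 'a', 26);
   ContractTestUtils.writeDataset(getFileSystem(), dest, data, data.length,
       1024, true /* overwrite */);
   ```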


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #983: HADOOP-16379: S3AInputStream#unbuffer should merge input stream stats into fs-wide stats

2019-06-18 Thread GitBox
steveloughran commented on a change in pull request #983: HADOOP-16379: 
S3AInputStream#unbuffer should merge input stream stats into fs-wide stats
URL: https://github.com/apache/hadoop/pull/983#discussion_r294866544
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AUnbuffer.java
 ##
 @@ -60,6 +70,66 @@ public void testUnbuffer() throws IOException {
 }
   }
 
+  /**
+   * Test that calling {@link S3AInputStream#unbuffer()} merges a stream's
+   * {@link org.apache.hadoop.fs.s3a.S3AInstrumentation.InputStreamStatistics}
+   * into the {@link S3AFileSystem}'s {@link S3AInstrumentation} instance.
+   */
+  @Test
+  public void testUnbufferStreamStatistics() throws IOException {
+describe("testUnbufferStreamStatistics");
+
+// Create a new S3AFileSystem instance so we can have an independent
+// instance of S3AInstrumentation
+Configuration conf = createConfiguration();
+S3AFileSystem fs = new S3AFileSystem();
+fs.initialize(getFileSystem().getUri(), conf);
+
+// Open file, read half the data, and then call unbuffer
+FSDataInputStream inputStream = null;
+try {
+  inputStream = fs.open(dest);
+
+  // Sanity check to make sure the stream statistics are 0
+  assertEquals(0,
+  fs.getInstrumentation().getCounterValue(STREAM_SEEK_BYTES_READ));
+
+  assertEquals(8, inputStream.read(new byte[8]));
 
 Review comment:
   add a string to explain what the mismatch would be on an assertEquals 
failure. FWIW, I ~insist on text for every assertion. Doing this first saves 
review cycles.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #983: HADOOP-16379: S3AInputStream#unbuffer should merge input stream stats into fs-wide stats

2019-06-18 Thread GitBox
steveloughran commented on a change in pull request #983: HADOOP-16379: 
S3AInputStream#unbuffer should merge input stream stats into fs-wide stats
URL: https://github.com/apache/hadoop/pull/983#discussion_r294866058
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AUnbuffer.java
 ##
 @@ -60,6 +70,66 @@ public void testUnbuffer() throws IOException {
 }
   }
 
+  /**
+   * Test that calling {@link S3AInputStream#unbuffer()} merges a stream's
+   * {@link org.apache.hadoop.fs.s3a.S3AInstrumentation.InputStreamStatistics}
+   * into the {@link S3AFileSystem}'s {@link S3AInstrumentation} instance.
+   */
+  @Test
+  public void testUnbufferStreamStatistics() throws IOException {
+describe("testUnbufferStreamStatistics");
+
+// Create a new S3AFileSystem instance so we can have an independent
+// instance of S3AInstrumentation
+Configuration conf = createConfiguration();
+S3AFileSystem fs = new S3AFileSystem();
+fs.initialize(getFileSystem().getUri(), conf);
+
+// Open file, read half the data, and then call unbuffer
+FSDataInputStream inputStream = null;
+try {
+  inputStream = fs.open(dest);
+
+  // Sanity check to make sure the stream statistics are 0
+  assertEquals(0,
 
 Review comment:
   use org.apache.hadoop.fs.s3a.S3ATestUtils.MetricDiff for measuring and 
asserting on the state of metrics, as it generates meaningful error messages 
on assert failures.
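   A sketch of that pattern, assuming MetricDiff's constructor takes the
   filesystem and the Statistic enum (check S3ATestUtils for the exact
   signature):
   
   ```java
   // Capture the counter before the operations under test...
   S3ATestUtils.MetricDiff seekBytesRead =
       new S3ATestUtils.MetricDiff(fs, Statistic.STREAM_SEEK_BYTES_READ);
   
   inputStream.read(new byte[8]);
   inputStream.unbuffer();
   
   // ...then assert on the delta; the failure message names the statistic
   // and both values, unlike a bare assertEquals.
   seekBytesRead.assertDiffEquals(8);
   ```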


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #983: HADOOP-16379: S3AInputStream#unbuffer should merge input stream stats into fs-wide stats

2019-06-18 Thread GitBox
steveloughran commented on a change in pull request #983: HADOOP-16379: 
S3AInputStream#unbuffer should merge input stream stats into fs-wide stats
URL: https://github.com/apache/hadoop/pull/983#discussion_r294865239
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AUnbuffer.java
 ##
 @@ -60,6 +70,66 @@ public void testUnbuffer() throws IOException {
 }
   }
 
+  /**
+   * Test that calling {@link S3AInputStream#unbuffer()} merges a stream's
+   * {@link org.apache.hadoop.fs.s3a.S3AInstrumentation.InputStreamStatistics}
+   * into the {@link S3AFileSystem}'s {@link S3AInstrumentation} instance.
+   */
+  @Test
+  public void testUnbufferStreamStatistics() throws IOException {
+describe("testUnbufferStreamStatistics");
+
+// Create a new S3AFileSystem instance so we can have an independent
+// instance of S3AInstrumentation
+Configuration conf = createConfiguration();
+S3AFileSystem fs = new S3AFileSystem();
+fs.initialize(getFileSystem().getUri(), conf);
+
+// Open file, read half the data, and then call unbuffer
+FSDataInputStream inputStream = null;
+try {
+  inputStream = fs.open(dest);
+
+  // Sanity check to make sure the stream statistics are 0
+  assertEquals(0,
+  fs.getInstrumentation().getCounterValue(STREAM_SEEK_BYTES_READ));
+
+  assertEquals(8, inputStream.read(new byte[8]));
+  inputStream.unbuffer();
+
+  // Validate that calling unbuffer updates the input stream statistics
+  assertEquals(8,
+  fs.getInstrumentation().getCounterValue(STREAM_SEEK_BYTES_READ));
+
+  // Validate that calling unbuffer twice in a row updates the statistics
+  // correctly
+  assertEquals(4, inputStream.read(new byte[4]));
+  inputStream.unbuffer();
+  assertEquals(12,
+  fs.getInstrumentation().getCounterValue(STREAM_SEEK_BYTES_READ));
+} finally {
+  if (inputStream != null) {
 
 Review comment:
   use `IOUtils.closeStream`
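   For illustration, the finally block rewritten with that utility (fs and
   dest as in the test; a sketch, not the patch):
   
   ```java
   FSDataInputStream inputStream = null;
   try {
     inputStream = fs.open(dest);
     // ... reads and unbuffer() calls ...
   } finally {
     // org.apache.hadoop.io.IOUtils.closeStream tolerates null and
     // swallows close() failures, so no explicit null guard is needed.
     IOUtils.closeStream(inputStream);
   }
   ```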


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16378) RawLocalFileStatus throws exception if a file is created and deleted quickly

2019-06-18 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1689#comment-1689
 ] 

Steve Loughran commented on HADOOP-16378:
-

I'd prefer moving off shell entirely and into the fs APIs, either Java or 
Hadoop native. Doesn't it already drop to some native lib if it's available?
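
As a rough illustration of that direction (not a patch): the permission bits 
can be read through java.nio instead of parsing {{ls -ld}} output, so a file 
deleted mid-query fails with a clean NoSuchFileException rather than a 
RuntimeException from shell parsing.

{code}
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.PosixFileAttributes;
import java.nio.file.attribute.PosixFilePermissions;

public class PermissionProbe {
  public static void main(String[] args) throws Exception {
    Path p = Paths.get(args[0]);
    // One atomic metadata read; no ls-then-stat race window.
    PosixFileAttributes attrs =
        Files.readAttributes(p, PosixFileAttributes.class);
    System.out.println(attrs.owner().getName() + " "
        + attrs.group().getName() + " "
        + PosixFilePermissions.toString(attrs.permissions()));
  }
}
{code}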

> RawLocalFileStatus throws exception if a file is created and deleted quickly
> 
>
> Key: HADOOP-16378
> URL: https://issues.apache.org/jira/browse/HADOOP-16378
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.3.0
> Environment: Ubuntu 18.04, Hadoop 2.7.3 (Though this problem exists 
> on later versions of Hadoop as well), Java 8 ( + Java 11).
>Reporter: K S
>Priority: Critical
>
> Bug occurs when NFS creates temporary ".nfs*" files as part of file moves and 
> accesses. If this file is deleted very quickly after being created, a 
> RuntimeException is thrown. The root cause is in the loadPermissionInfo 
> method in org.apache.hadoop.fs.RawLocalFileSystem. To get the permission 
> info, it first does
>  
> {code:java}
> ls -ld{code}
>  and then attempts to get permissions info about each file. If a file 
> disappears between these two steps, an exception is thrown.
> *Reproduction Steps:*
> An isolated way to reproduce the bug is to run FileInputFormat.listStatus 
> over and over on the same dir that we’re creating those temp files in. On 
> Ubuntu or any other Linux-based system, this should fail intermittently
> *Fix:*
> One way in which we managed to fix this was to ignore the exception being 
> thrown in loadPermissionInfo() if the exit code is 1 or 2. Alternatively, it's 
> possible that turning "useDeprecatedFileStatus" off in RawLocalFileSystem 
> would fix this issue, though we never tested this, and the flag was 
> implemented to fix -HADOOP-9652-. Could also fix in conjunction with 
> HADOOP-8772.
>  
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16380) ITestS3AContractRootDir failing on trunk: tombstones mislead about directory empty status

2019-06-18 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16380:

Affects Version/s: 3.2.0
   3.0.3
   3.1.2

> ITestS3AContractRootDir failing on trunk: tombstones mislead about directory 
> empty status
> -
>
> Key: HADOOP-16380
> URL: https://issues.apache.org/jira/browse/HADOOP-16380
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.2.0, 3.0.3, 3.3.0, 3.1.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Critical
>
> If S3AFileSystem does an S3 LIST restricted to a single object to see if a 
> directory is empty, and the single entry found has a tombstone marker (either 
> from an inconsistent DDB Table or from an eventually consistent LIST) then it 
> will consider the directory empty, _even if there is 1+ entry which is not 
> deleted_
> We need to make sure the calculation of whether a directory is empty or not 
> is resilient to this, efficiently. 
> It surfaces as an issue in two places
> * delete(path) (where it may make things worse)
> * rename(src, dest), where a check is made for dest != an empty directory.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14936) S3Guard: remove "experimental" from documentation

2019-06-18 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866651#comment-16866651
 ] 

Steve Loughran commented on HADOOP-14936:
-

adding HADOOP-16380, a consistency-related bug surfacing in rename and delete

> S3Guard: remove "experimental" from documentation
> -
>
> Key: HADOOP-14936
> URL: https://issues.apache.org/jira/browse/HADOOP-14936
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Steve Loughran
>Priority: Major
>
> I think it is time to remove the "experimental feature" designation in the 
> site docs for S3Guard.  Discuss.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16380) ITestS3AContractRootDir failing on trunk: tombstones mislead about directory empty status

2019-06-18 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16380:

Description: 
If S3AFileSystem does an S3 LIST restricted to a single object to see if a 
directory is empty, and the single entry found has a tombstone marker (either 
from an inconsistent DDB Table or from an eventually consistent LIST) then it 
will consider the directory empty, _even if there is 1+ entry which is not 
deleted_

We need to make sure the calculation of whether a directory is empty or not is 
resilient to this, efficiently. 

It surfaces as an issue in two places

* delete(path) (where it may make things worse)
* rename(src, dest), where a check is made for dest != an empty directory.



  was:
I'm seeing reproducible failures of {{ITestS3AContractRootDir}} which look like 
consistency problems *even when S3Guard is enabled*. 

Suspicion: root dir listings are still inconsistent, due to the way we don't 
keep root entries in the table


> ITestS3AContractRootDir failing on trunk: tombstones mislead about directory 
> empty status
> -
>
> Key: HADOOP-16380
> URL: https://issues.apache.org/jira/browse/HADOOP-16380
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Critical
>
> If S3AFileSystem does an S3 LIST restricted to a single object to see if a 
> directory is empty, and the single entry found has a tombstone marker (either 
> from an inconsistent DDB Table or from an eventually consistent LIST) then it 
> will consider the directory empty, _even if there is 1+ entry which is not 
> deleted_
> We need to make sure the calculation of whether a directory is empty or not 
> is resilient to this, efficiently. 
> It surfaces as an issue in two places
> * delete(path) (where it may make things worse)
> * rename(src, dest), where a check is made for dest != an empty directory.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16380) ITestS3AContractRootDir failing on trunk: tombstones mislead about directory empty status

2019-06-18 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866642#comment-16866642
 ] 

Steve Loughran commented on HADOOP-16380:
-

note: some of my rename optimisations may have increased the likelihood of the 
store going inconsistent, as addAncestors is now used to scan up the tree to 
decide when to stop, whereas put() always put everything up. I'm going to 
reinstate the original put code (with the caching of the state in the 
AncestorState), so that if you create a file with a tombstone some directories 
up, the tombstone will always be deleted, even if there is 1+ valid parent in 
between.

> ITestS3AContractRootDir failing on trunk: tombstones mislead about directory 
> empty status
> -
>
> Key: HADOOP-16380
> URL: https://issues.apache.org/jira/browse/HADOOP-16380
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Critical
>
> I'm seeing reproducible failures of {{ITestS3AContractRootDir}} which look 
> like consistency problems *even when S3Guard is enabled*. 
> Suspicion: root dir listings are still inconsistent, due to the way we don't 
> keep root entries in the table



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran edited a comment on issue #985: HADOOP-16380. ITestS3AContractRootDir failing on trunk

2019-06-18 Thread GitBox
steveloughran edited a comment on issue #985: HADOOP-16380. 
ITestS3AContractRootDir failing on trunk
URL: https://github.com/apache/hadoop/pull/985#issuecomment-503155299
 
 
   My S3 store was somehow out of sync w/ s3guard: the table has the file 
s3a://hwdev-steve-ireland-new/fork-0007/test/testEmptyDir/file1 and has 30+ 
tombstone markers, including one for the /fork-007/ path.
   
   Because there is an entry in the store, the LIST call returns /fork-007; 
because of the tombstone it is then filtered, leading the code to conclude 
that this is an empty dir.
   
   Now, this is due to an inconsistent store: I'm going to delete that object 
and all should resolve. But even if you don't have explicit inconsistencies, 
you would get it from an inconsistent listing anyway.
   
   ## underlying problem.
   ```
   java.lang.AssertionError: More files found in listFiles(root, false): 
[s3a://hwdev-steve-ireland-new/fork-0007 s3a://hwdev-steve-ireland-new/test] 
than in listStatus(root): [s3a://hwdev-steve-ireland-new/test] 
   Expected :1
   Actual   :2
   
   
   
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at 
org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest.testSimpleRootListing(AbstractContractRootDirectoryTest.java:254)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:745)
   ```
   
   ## failure to delete empty root dir: 
   
   ```
   java.lang.AssertionError: expected file to be deleted: unexpectedly found 
/testRmRootRecursive as  
S3AFileStatus{path=s3a://hwdev-steve-ireland-new/testRmRootRecursive; 
isDirectory=false; length=0; replication=1; blocksize=33554432; 
modification_time=1560866625691; access_time=0; owner=stevel; group=stevel; 
permission=rw-rw-rw-; isSymlink=false; hasAcl=false; isEncrypted=true; 
isErasureCoded=false} isEmptyDirectory=FALSE 
eTag=d41d8cd98f00b204e9800998ecf8427e versionId=vOlpFf.CxicmjPPVMj76yJTeCtEnqSyu
   
at org.junit.Assert.fail(Assert.java:88)
at 
org.apache.hadoop.fs.contract.ContractTestUtils.assertPathDoesNotExist(ContractTestUtils.java:977)
at 
org.apache.hadoop.fs.contract.AbstractFSContractTestBase.assertPathDoesNotExist(AbstractFSContractTestBase.java:305)
at 
org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest.testRmRootRecursive(AbstractContractRootDirectoryTest.java:196)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:745)
   ```
   
   ```
   

[GitHub] [hadoop] steveloughran commented on issue #985: HADOOP-16380. ITestS3AContractRootDir failing on trunk

2019-06-18 Thread GitBox
steveloughran commented on issue #985: HADOOP-16380. ITestS3AContractRootDir 
failing on trunk
URL: https://github.com/apache/hadoop/pull/985#issuecomment-503155299
 
 
   My S3 store was somehow out of sync w/ s3guard: the table has a fork-0007 
   
   ## underlying problem.
   ```
   java.lang.AssertionError: More files found in listFiles(root, false): 
[s3a://hwdev-steve-ireland-new/fork-0007 s3a://hwdev-steve-ireland-new/test] 
than in listStatus(root): [s3a://hwdev-steve-ireland-new/test] 
   Expected :1
   Actual   :2
   
   
   
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at 
org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest.testSimpleRootListing(AbstractContractRootDirectoryTest.java:254)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:745)
   ```
   
   ## failure to delete empty root dir: java.lang.AssertionError: expected file 
to be deleted: unexpectedly found /testRmRootRecursive as  
S3AFileStatus{path=s3a://hwdev-steve-ireland-new/testRmRootRecursive; 
isDirectory=false; length=0; replication=1; blocksize=33554432; 
modification_time=1560866625691; access_time=0; owner=stevel; group=stevel; 
permission=rw-rw-rw-; isSymlink=false; hasAcl=false; isEncrypted=true; 
isErasureCoded=false} isEmptyDirectory=FALSE 
eTag=d41d8cd98f00b204e9800998ecf8427e versionId=vOlpFf.CxicmjPPVMj76yJTeCtEnqSyu
   
at org.junit.Assert.fail(Assert.java:88)
at 
org.apache.hadoop.fs.contract.ContractTestUtils.assertPathDoesNotExist(ContractTestUtils.java:977)
at 
org.apache.hadoop.fs.contract.AbstractFSContractTestBase.assertPathDoesNotExist(AbstractFSContractTestBase.java:305)
at 
org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest.testRmRootRecursive(AbstractContractRootDirectoryTest.java:196)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:745)
   ```
   
   ```
   
   java.lang.AssertionError: fs.listFiles(root, true) found 
S3ALocatedFileStatus{path=s3a://hwdev-steve-ireland-new/fork-0007/test/testEmptyDir/file1;
 isDirectory=false; length=0; replication=1; blocksize=33554432; 
modification_time=1560796206000; access_time=0; owner=stevel; group=stevel; 
permission=rw-rw-rw-; isSymlink=false; hasAcl=false; isEncrypted=true; 
isErasureCoded=false}
   
at org.junit.Assert.fail(Assert.java:88)
at 

[GitHub] [hadoop] steveloughran opened a new pull request #985: HADOOP-16380. ITestS3AContractRootDir failing on trunk

2019-06-18 Thread GitBox
steveloughran opened a new pull request #985: HADOOP-16380. 
ITestS3AContractRootDir failing on trunk
URL: https://github.com/apache/hadoop/pull/985
 
 
   Cause: tombstones mislead about directory empty status
   
   This is not the fix, though it provides diagnostics about the problem with 
more checks and more details in assertions.
   
   Change-Id: I583071b254a89f64687b87e653afd01d65a8e8de


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16380) ITestS3AContractRootDir failing on trunk: tombstones mislead about directory empty status

2019-06-18 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866605#comment-16866605
 ] 

Steve Loughran commented on HADOOP-16380:
-

Note, there are some other places where innerGetFileStatus(path, true) is used 
which are vulnerable to the same problem:

* innerRename checks the empty dir status on src and dest paths. The source 
check doesn't seem to need it, but the destDir check is needed to block 
renaming onto a non-empty directory.
* Some tests call it too; less important.

This problem only exists when S3Guard is running (it has the tombstones), and 
then iff the deleted paths come first in the listing. I.e., it doesn't surface 
if the file is {{aaa}} and the dir {{zzz}}, but it does the other way around.

One possible fix related to HADOOP-15988: even if !authoritative (remember, / 
is always tagged as nonauth), we could still set the emptyDir marker == false 
if we had 1+ non-expired child, as we could be confident that yes, it was not 
empty. That would be enough to block the two operations (rename, delete) which 
don't care which children a directory has, only whether it is non-empty. Does 
that make sense?

+[~gabor.bota], [~ajfabbri]


> ITestS3AContractRootDir failing on trunk: tombstones mislead about directory 
> empty status
> -
>
> Key: HADOOP-16380
> URL: https://issues.apache.org/jira/browse/HADOOP-16380
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Critical
>
> I'm seeing reproducible failures of {{ITestS3AContractRootDir}} which look 
> like consistency problems *even when S3Guard is enabled*. 
> Suspicion: root dir listings are still inconsistent, due to the way we don't 
> keep root entries in the table



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16380) ITestS3AContractRootDir failing on trunk: tombstones mislead about directory empty status

2019-06-18 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866597#comment-16866597
 ] 

Steve Loughran commented on HADOOP-16380:
-

Cause (which you can verify by changing the limit on L2626 in S3AFileSystem):

{code}
 S3ListRequest request = createListObjectsRequest(key, path, 1);
{code}

If the single entry returned by the LIST is one which has a tombstone marker, 
then the child list is considered empty, so the empty directory marker is set. 
As a result, in a delete(path, false) operation, the S3AFileSystem will see 
the empty dir marker, conclude that the directory is free to be deleted (even 
if recursive==false), and implement the delete simply by adding an empty 
directory marker at the destination. This then creates orphan files underneath.

I can verify this because changing the limit to a number greater than the 
number of outstanding tombstones stops the test 
{{testRmNonEmptyRootDirNonRecursive}} failing.

{code}
 S3ListRequest request = createListObjectsRequest(key, path, 100);
{code}

h2. Analysis

This is a problem which has been lurking for some time. It's been surfacing 
more on the root dir tests because it's under root where lots of files get 
created and deleted, so it has the most tombstone markers, and the possibility 
exists that under the list there's an object dir1/file4 which exists in the 
store but where there is a tombstone marker for dir1/, at which point things 
get confused.

h3. Store status _as returned by LIST_
{code}
/dir1/file2
/file1
{code}

This could be the actual store status, or it could be the status as listed due 
to eventual consistency on the delete call

h3. metastore status

{code}
/dir1 -> isDeleted
/file1 -> is file
{code}

h3. list response

{code}
LIST (prefix=/, delim=/, limit=1) =>
 commonprefixes=dir1
 truncated=true
{code}

h3. S3AFileSystem.innerGetFileStatus("/", needEmptyDirectory=true)

# does the LIST call
# gets the /dir entry back (its sorted ahead of "file")
# sees that there is a tombstone
# returns an S3AFileStatus with emptyDirectory = true

* For any path other than root, an empty dir marker for the parent may be 
created, and in S3Guard a tombstone is added
* For /, the delete call will simply downgrade to a no-op

h2. Solutions

This isn't something which can be defended against by tombstones. When we need 
to know whether a dir is empty, we should immediately conclude that the dir is 
not empty if there is any entry in the metastore which is not a tombstone. 
This will let us bypass S3 too.

Really we should be combining that getFileStatus with the real listing; there's 
no need to duplicate the LIST calls
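
As a sketch only (using the S3Guard types DirListingMetadata/PathMetadata; the 
helper name is made up, this is not the eventual patch):

{code}
// If any metastore entry for the directory is live (not a tombstone),
// the directory is provably non-empty and no S3 LIST is needed.
private static boolean isKnownNonEmpty(DirListingMetadata listing) {
  for (PathMetadata entry : listing.getListing()) {
    if (!entry.isDeleted()) {
      return true;   // one live child blocks rename/delete onto this dir
    }
  }
  return false;      // inconclusive: fall back to an S3 LIST
}
{code}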





> ITestS3AContractRootDir failing on trunk: tombstones mislead about directory 
> empty status
> -
>
> Key: HADOOP-16380
> URL: https://issues.apache.org/jira/browse/HADOOP-16380
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Critical
>
> I'm seeing reproducible failures of {{ITestS3AContractRootDir}} which look 
> like consistency problems *even when S3Guard is enabled*. 
> Suspicion: root dir listings are still inconsistent, due to the way we don't 
> keep root entries in the table



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #984: HDDS-1674 Make ScmBlockLocationProtocol message type based

2019-06-18 Thread GitBox
hadoop-yetus commented on issue #984: HDDS-1674 Make ScmBlockLocationProtocol 
message type based
URL: https://github.com/apache/hadoop/pull/984#issuecomment-503126050
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 43 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 549 | trunk passed |
   | +1 | compile | 291 | trunk passed |
   | +1 | checkstyle | 85 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 969 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 171 | trunk passed |
   | 0 | spotbugs | 346 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 539 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 477 | the patch passed |
   | +1 | compile | 299 | the patch passed |
   | +1 | cc | 299 | the patch passed |
   | +1 | javac | 299 | the patch passed |
   | -0 | checkstyle | 41 | hadoop-hdds: The patch generated 21 new + 0 
unchanged - 0 fixed = 21 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 1 | The patch has no whitespace issues. |
   | +1 | shadedclient | 761 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 179 | the patch passed |
   | +1 | findbugs | 605 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 210 | hadoop-hdds in the patch failed. |
   | -1 | unit | 2104 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 51 | The patch does not generate ASF License warnings. |
   | | | 7591 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.common.impl.TestHddsDispatcher 
|
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerCommandHandler
 |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-984/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/984 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux c27ba5a6a7b7 4.4.0-144-generic #170~14.04.1-Ubuntu SMP Mon 
Mar 18 15:02:05 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 335c1c9 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-984/1/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-984/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-984/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-984/1/testReport/ |
   | Max. process+thread count | 4887 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common U: hadoop-hdds/common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-984/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16380) ITestS3AContractRootDir failing on trunk

2019-06-18 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16380:

Priority: Critical  (was: Major)

> ITestS3AContractRootDir failing on trunk
> 
>
> Key: HADOOP-16380
> URL: https://issues.apache.org/jira/browse/HADOOP-16380
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Critical
>
> I'm seeing reproducible failures of {{ITestS3AContractRootDir}} which look 
> like consistency problems *even when S3Guard is enabled*. 
> Suspicion: root dir listings are still inconsistent, due to the way we don't 
> keep root entries in the table



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16380) ITestS3AContractRootDir failing on trunk: tombstones mislead about directory empty status

2019-06-18 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16380:

Summary: ITestS3AContractRootDir failing on trunk: tombstones mislead about 
directory empty status  (was: ITestS3AContractRootDir failing on trunk)

> ITestS3AContractRootDir failing on trunk: tombstones mislead about directory 
> empty status
> -
>
> Key: HADOOP-16380
> URL: https://issues.apache.org/jira/browse/HADOOP-16380
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Critical
>
> I'm seeing reproducible failures of {{ITestS3AContractRootDir}} which look 
> like consistency problems *even when S3Guard is enabled*. 
> Suspicion: root dir listings are still inconsistent, due to the way we don't 
> keep root entries in the table



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #977: (HDFS-14541)when evictableMmapped or evictable size is zero, do not throw NoSuchE…

2019-06-18 Thread GitBox
hadoop-yetus commented on issue #977: (HDFS-14541)when evictableMmapped or 
evictable size is zero, do not throw NoSuchE… 
URL: https://github.com/apache/hadoop/pull/977#issuecomment-503083218
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 30 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1037 | trunk passed |
   | +1 | compile | 44 | trunk passed |
   | +1 | checkstyle | 24 | trunk passed |
   | +1 | mvnsite | 48 | trunk passed |
   | +1 | shadedclient | 735 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 31 | trunk passed |
   | 0 | spotbugs | 124 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 122 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 41 | the patch passed |
   | +1 | compile | 37 | the patch passed |
   | +1 | javac | 37 | the patch passed |
   | +1 | checkstyle | 16 | the patch passed |
   | +1 | mvnsite | 42 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 742 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 27 | the patch passed |
   | +1 | findbugs | 133 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 129 | hadoop-hdfs-client in the patch passed. |
   | +1 | asflicense | 45 | The patch does not generate ASF License warnings. |
   | | | 3385 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-977/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/977 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 181fc0d8940b 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 335c1c9 |
   | Default Java | 1.8.0_212 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-977/3/testReport/ |
   | Max. process+thread count | 444 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-977/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ehiggs commented on a change in pull request #743: HADOOP-11452 make rename/3 public

2019-06-18 Thread GitBox
ehiggs commented on a change in pull request #743: HADOOP-11452 make rename/3 
public
URL: https://github.com/apache/hadoop/pull/743#discussion_r294778507
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/RenameHelper.java
 ##
 @@ -0,0 +1,174 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.impl;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.util.Optional;
+
+import org.slf4j.Logger;
+
+import org.apache.hadoop.fs.FileAlreadyExistsException;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Options;
+import org.apache.hadoop.fs.ParentNotDirectoryException;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.PathIOException;
+
+import static 
org.apache.hadoop.fs.FSExceptionMessages.RENAME_DEST_EQUALS_SOURCE;
 
 Review comment:
   ```suggestion
   import static org.apache.hadoop.fs.FSExceptionMessages.*;
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #977: (HDFS-14541)when evictableMmapped or evictable size is zero, do not throw NoSuchE…

2019-06-18 Thread GitBox
hadoop-yetus commented on issue #977: (HDFS-14541)when evictableMmapped or 
evictable size is zero, do not throw NoSuchE… 
URL: https://github.com/apache/hadoop/pull/977#issuecomment-503077874
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 75 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1120 | trunk passed |
   | +1 | compile | 43 | trunk passed |
   | +1 | checkstyle | 22 | trunk passed |
   | +1 | mvnsite | 45 | trunk passed |
   | +1 | shadedclient | 756 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 29 | trunk passed |
   | 0 | spotbugs | 127 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 126 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 39 | the patch passed |
   | +1 | compile | 37 | the patch passed |
   | +1 | javac | 37 | the patch passed |
   | +1 | checkstyle | 16 | the patch passed |
   | +1 | mvnsite | 40 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 807 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 30 | the patch passed |
   | +1 | findbugs | 142 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 117 | hadoop-hdfs-client in the patch passed. |
   | +1 | asflicense | 31 | The patch does not generate ASF License warnings. |
   | | | 3586 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.5 Server=18.09.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-977/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/977 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 4b4a9d84104c 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 335c1c9 |
   | Default Java | 1.8.0_212 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-977/2/testReport/ |
   | Max. process+thread count | 306 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-977/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16377) Moving logging APIs over to slf4j

2019-06-18 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866544#comment-16866544
 ] 

Prabhu Joseph commented on HADOOP-16377:


[~jojochuang] The following places still use commons-logging after 
[^HADOOP-16377-001.patch]:
 # {{IOUtils}}, {{ServiceOperations}}, {{ReflectionUtils}} and 
{{GenericTestUtils}} have public APIs which are already deprecated. Do you know 
when they can be removed, now that they are marked deprecated?
 # The {{ITestFileSystemOperationsWithThreads}} and 
{{ITestNativeAzureFileSystemClientLogging}} test cases require commons-logging 
(HADOOP-14573).

*Functional Testing:*
{code:java}
1. LogLevel: 
yarn daemonlog -setlevel `hostname -f`:8088 org.apache.hadoop DEBUG


2. Namenode FSNamesystem Audit Log:
log4j.appender.FSN=org.apache.log4j.RollingFileAppender
log4j.appender.FSN.File=/HADOOP/hadoop/logs/fsn.log
log4j.appender.FSN.layout=org.apache.log4j.PatternLayout
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=DEBUG,FSN

hdfs dfsadmin -listOpenFiles -path /DATA


3. ResourceManager HttpRequest Log:
log4j.logger.http.requests.resourcemanager=INFO,resourcemanagerrequestlog
log4j.appender.resourcemanagerrequestlog=org.apache.hadoop.http.HttpRequestLogAppender
log4j.appender.resourcemanagerrequestlog.Filename=${hadoop.log.dir}/jetty-resourcemanager-yyyy_mm_dd.log
log4j.appender.resourcemanagerrequestlog.RetainDays=3


4. NameNode Metrics Logger:
dfs.namenode.metrics.logger.period.seconds = 10

namenode.metrics.logger=INFO,NNMETRICSRFA
log4j.logger.NameNodeMetricsLog=${namenode.metrics.logger}
log4j.appender.NNMETRICSRFA=org.apache.log4j.RollingFileAppender
log4j.appender.NNMETRICSRFA.File=${hadoop.log.dir}/namenode-metrics.log


5. DataNode Metrics Logger:
dfs.datanode.metrics.logger.period.seconds = 10

datanode.metrics.logger=INFO,DNMETRICSRFA
log4j.logger.DataNodeMetricsLog=${datanode.metrics.logger}
log4j.appender.DNMETRICSRFA=org.apache.log4j.RollingFileAppender
log4j.appender.DNMETRICSRFA.File=${hadoop.log.dir}/datanode-metrics.log


6. DataNode Client Trace:
log4j.logger.org.apache.hadoop.hdfs.server.datanode.DataNode=DEBUG,CLIENTTRACE
log4j.appender.CLIENTTRACE=org.apache.log4j.RollingFileAppender
log4j.appender.CLIENTTRACE.File=${hadoop.log.dir}/clienttrace.log
log4j.appender.CLIENTTRACE.layout=org.apache.log4j.PatternLayout


7. Namenode, Datanode and Resourcemanager startup and client operations.

{code}
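
As a concrete illustration of the migration itself, the typical per-class change 
looks like the following (a minimal sketch; the class name {{Sample}} is 
illustrative, not from the patch):

{code:java}
// Before: commons-logging (the API being removed)
//   import org.apache.commons.logging.Log;
//   import org.apache.commons.logging.LogFactory;
//   private static final Log LOG = LogFactory.getLog(Sample.class);

// After: slf4j
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Sample {
  private static final Logger LOG = LoggerFactory.getLogger(Sample.class);

  void process(String path) {
    // slf4j's parameterized logging defers string construction until the
    // level is enabled, so simple isDebugEnabled() guards can be dropped.
    LOG.debug("Processing path {}", path);
  }
}
{code}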

> Moving logging APIs over to slf4j
> -
>
> Key: HADOOP-16377
> URL: https://issues.apache.org/jira/browse/HADOOP-16377
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16377-001.patch
>
>
> As of today, there are still 50 references to log4j1
> {code}
> $ grep -r "import org.apache.commons.logging.Log;" . | wc -l
>   50
> {code}
> To achieve the goal of HADOOP-12956/HADOOP-16206, we should invest time to 
> move them to slf4j



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16184) S3Guard: Handle OOB deletions and creation of a file which has a tombstone marker

2019-06-18 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota resolved HADOOP-16184.
-
Resolution: Fixed

> S3Guard: Handle OOB deletions and creation of a file which has a tombstone 
> marker
> -
>
> Key: HADOOP-16184
> URL: https://issues.apache.org/jira/browse/HADOOP-16184
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> When a file is deleted in S3 using S3Guard a tombstone marker will be added 
> for that file in the MetadataStore. If another process creates the file 
> without using S3Guard (as an out of band operation - OOB) the file will still 
> not be visible to the client using S3Guard because of the deletion tombstone.
> 
> The whole of S3Guard is potentially brittle to
>  * OOB deletions: we skip it in HADOOP-15999, so no worse, but because the 
> S3AInputStream retries on FNFE, so as to "debounce" cached 404s, it's 
> potentially going to retry forever.
>  * OOB creation of a file which has a deletion tombstone marker.
> The things this issue covers:
>  * Write a test to simulate that deletion problem, to see what happens. We 
> ought to have the S3AInputStream retry briefly on that initial GET failing, 
> but only on that initial one. (after setting "fs.s3a.retry.limit" to 
> something low & the interval down to 10ms or so to fail fast)
>  * Sequences
> {noformat}
> 1. create; delete; open; read -> fail after retry
> 2. create; open, read, delete, read -> fail fast on the second read
> {noformat}
> The StoreStatistics of the filesystem's IGNORED_ERRORS stat will be increased 
> on the ignored error, so on sequence 1 it will have increased, whereas on 
> sequence 2 it will not have. If either of these tests doesn't quite fail as 
> expected, we can disable the tests and continue, at least now with some tests 
> to simulate a condition we don't have a fix for.
>  * For both, we just need to have some model of how long it takes for 
> debouncing to stabilize. Then in this new check, if an FNFE is raised and the 
> check is happening > (modtime + debounce-delay) then it's a real FNFE.
> This issue is created based on [~ste...@apache.org] remarks and comments on 
> HADOOP-15999.
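
A minimal sketch of how sequence 1 could be exercised through the plain 
FileSystem API (the guardedFs/rawFs fixtures are assumed test setup, not code 
from this issue):

{code:java}
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// guardedFs: S3A FileSystem on the bucket with S3Guard enabled.
// rawFs: a second S3A FileSystem on the same bucket, S3Guard disabled.
void oobCreateAfterTombstone(FileSystem guardedFs, FileSystem rawFs,
    Path path) throws Exception {
  guardedFs.create(path, true).close();  // create
  guardedFs.delete(path, false);         // delete -> tombstone in MetadataStore
  rawFs.create(path, true).close();      // OOB re-create, invisible to S3Guard
  // Expected per sequence 1: the guarded read fails after a bounded retry,
  // since the tombstone hides the out-of-band file.
  guardedFs.open(path).close();
}
{code}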



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16377) Moving logging APIs over to slf4j

2019-06-18 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-16377:
---
Status: Patch Available  (was: Open)

> Moving logging APIs over to slf4j
> -
>
> Key: HADOOP-16377
> URL: https://issues.apache.org/jira/browse/HADOOP-16377
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16377-001.patch
>
>
> As of today, there are still 50 references to log4j1
> {code}
> $ grep -r "import org.apache.commons.logging.Log;" . | wc -l
>   50
> {code}
> To achieve the goal of HADOOP-12956/HADOOP-16206, we should invest time to 
> move them to slf4j



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16377) Moving logging APIs over to slf4j

2019-06-18 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-16377:
---
Attachment: HADOOP-16377-001.patch

> Moving logging APIs over to slf4j
> -
>
> Key: HADOOP-16377
> URL: https://issues.apache.org/jira/browse/HADOOP-16377
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16377-001.patch
>
>
> As of today, there are still 50 references to log4j1
> {code}
> $ grep -r "import org.apache.commons.logging.Log;" . | wc -l
>   50
> {code}
> To achieve the goal of HADOOP-12956/HADOOP-16206, we should invest time to 
> move them to slf4j



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sodonnel commented on issue #984: HDDS-1674 Make ScmBlockLocationProtocol message type based

2019-06-18 Thread GitBox
sodonnel commented on issue #984: HDDS-1674 Make ScmBlockLocationProtocol 
message type based
URL: https://github.com/apache/hadoop/pull/984#issuecomment-503054397
 
 
   /label ozone


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sodonnel opened a new pull request #984: HDDS-1674 Make ScmBlockLocationProtocol message type based

2019-06-18 Thread GitBox
sodonnel opened a new pull request #984: HDDS-1674 Make 
ScmBlockLocationProtocol message type based
URL: https://github.com/apache/hadoop/pull/984
 
 
   This PR is a first attempt at refactoring the ScmBlockLocationProtocol using 
a single message type as is used in the OzoneManagerProtocol. In this change, 
the new message wraps the existing messages and the translator classes simply 
wrap or unwrap it.
   
Only TraceID has been moved to the wrapper message; moving error handling 
and error codes to the wrapper will be done in a separate change.
   
Before this can be merged, we still need to determine whether the clientId 
should be present in ScmBlockLocationProtocol. It is in OzoneManagerProtocol and 
has been replicated here for now, but it can be removed if needed.
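   
   For readers skimming the digest, a rough sketch of the wrap/unwrap step this 
describes (all message, field, and builder names are illustrative placeholders, 
not the actual generated protobuf classes):
   
   ```java
   // Hypothetical wrapper type: one request message for the whole protocol,
   // carrying only the traceID plus exactly one wrapped per-call message.
   ScmBlockLocationRequest wrap(AllocateScmBlockRequest inner, String traceId) {
     return ScmBlockLocationRequest.newBuilder()
         .setTraceID(traceId)                 // the only field moved to the wrapper
         .setAllocateScmBlockRequest(inner)   // wrap the existing message as-is
         .build();
   }
   ```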


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13980) S3Guard CLI: Add fsck check command

2019-06-18 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-13980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866468#comment-16866468
 ] 

Steve Loughran commented on HADOOP-13980:
-

An extra inconsistency I've somehow managed to create in my own code:

* the directory is a tombstone
* a child exists

I think this may come from some of the rename optimisations of HADOOP-15183: are 
we trying to be too clever in walking up the tree in addAncestors, and should we 
always try to write up to the top? 

> S3Guard CLI: Add fsck check command
> ---
>
> Key: HADOOP-13980
> URL: https://issues.apache.org/jira/browse/HADOOP-13980
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Major
>
> As discussed in HADOOP-13650, we want to add an S3Guard CLI command which 
> compares S3 with MetadataStore, and returns a failure status if any 
> invariants are violated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #951: HADOOP-15183. S3Guard store becomes inconsistent after partial failure of rename

2019-06-18 Thread GitBox
steveloughran commented on issue #951: HADOOP-15183. S3Guard store becomes 
inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/951#issuecomment-503045141
 
 
   Last little refactoring caused whitespace issues. Will fix
   ```
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/InternalConstants.java:24:public
 class InternalConstants {:1: Utility classes should not have a public or 
default constructor. [HideUtilityClassConstructor]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:153:
  public RenameOperation(:10: More than 7 parameters (found 8). 
[ParameterNumber]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:399:
  final S3ALocatedFileStatus sourceStatus,:34: 'sourceStatus' hides a 
field. [HiddenField]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:438:
  private Path copySourceAndUpdateTracker(:16: More than 7 parameters (found 
8). [ParameterNumber]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:439:
  final RenameTracker renameTracker,:27: 'renameTracker' hides a field. 
[HiddenField]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:440:
  final Path sourcePath,:18: 'sourcePath' hides a field. [HiddenField]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:444:
  final Path destPath,:18: 'destPath' hides a field. [HiddenField]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:445:
  final String destKey,:20: 'destKey' hides a field. [HiddenField]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:487:
  final RenameTracker renameTracker,:27: 'renameTracker' hides a field. 
[HiddenField]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:488:
  final List keysToDelete,:51: 
'keysToDelete' hides a field. [HiddenField]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:489:
  final List pathsToDelete):24: 'pathsToDelete' hides a field. 
[HiddenField]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:541:
final Path path,:9: Redundant 'final' modifier. [RedundantModifier]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:542:
final String eTag,:9: Redundant 'final' modifier. [RedundantModifier]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:543:
final String versionId,:9: Redundant 'final' modifier. 
[RedundantModifier]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:544:
final long len);:9: Redundant 'final' modifier. [RedundantModifier]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:552:
final S3AFileStatus fileStatus);:9: Redundant 'final' modifier. 
[RedundantModifier]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:561:
final FileStatus fileStatus);:9: Redundant 'final' modifier. 
[RedundantModifier]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:563:
/**: First sentence should end with a period. [JavadocStyle]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:633:
final List keysToDelete,:9: Redundant 
'final' modifier. [RedundantModifier]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:634:
final boolean deleteFakeDir,:9: Redundant 'final' modifier. 
[RedundantModifier]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:635:
final List undeletedObjectsOnFailure):9: Redundant 'final' 
modifier. [RedundantModifier]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/StoreContext.java:119:
  public StoreContext(:10: More than 7 parameters (found 17). [ParameterNumber]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3Guard.java:50:import
 org.apache.hadoop.fs.s3a.Tristate;:8: Unused import - 
org.apache.hadoop.fs.s3a.Tristate. [UnusedImports]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3Guard.java:525:
   * {@link MetadataStore#addAncestors(Path, ITtlTimeProvider, 
BulkOperationState)}.: Line is longer than 80 characters (found 84). 
[LineLength]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardTool.java:1163:
clearBucketOption(unguardedConf, fsURI.getHost(), 
S3_METADATA_STORE_IMPL);: Line is longer 

[GitHub] [hadoop] steveloughran removed a comment on issue #879: HADOOP-15563 S3Guard to create on-demand DDB tables

2019-06-18 Thread GitBox
steveloughran removed a comment on issue #879: HADOOP-15563 S3Guard to create 
on-demand DDB tables
URL: https://github.com/apache/hadoop/pull/879#issuecomment-503038386
 
 
   Last little refactoring caused whitespace issues. Will fix
   ```
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/InternalConstants.java:24:public
 class InternalConstants {:1: Utility classes should not have a public or 
default constructor. [HideUtilityClassConstructor]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:153:
  public RenameOperation(:10: More than 7 parameters (found 8). 
[ParameterNumber]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:399:
  final S3ALocatedFileStatus sourceStatus,:34: 'sourceStatus' hides a 
field. [HiddenField]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:438:
  private Path copySourceAndUpdateTracker(:16: More than 7 parameters (found 
8). [ParameterNumber]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:439:
  final RenameTracker renameTracker,:27: 'renameTracker' hides a field. 
[HiddenField]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:440:
  final Path sourcePath,:18: 'sourcePath' hides a field. [HiddenField]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:444:
  final Path destPath,:18: 'destPath' hides a field. [HiddenField]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:445:
  final String destKey,:20: 'destKey' hides a field. [HiddenField]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:487:
  final RenameTracker renameTracker,:27: 'renameTracker' hides a field. 
[HiddenField]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:488:
  final List keysToDelete,:51: 
'keysToDelete' hides a field. [HiddenField]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:489:
  final List pathsToDelete):24: 'pathsToDelete' hides a field. 
[HiddenField]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:541:
final Path path,:9: Redundant 'final' modifier. [RedundantModifier]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:542:
final String eTag,:9: Redundant 'final' modifier. [RedundantModifier]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:543:
final String versionId,:9: Redundant 'final' modifier. 
[RedundantModifier]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:544:
final long len);:9: Redundant 'final' modifier. [RedundantModifier]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:552:
final S3AFileStatus fileStatus);:9: Redundant 'final' modifier. 
[RedundantModifier]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:561:
final FileStatus fileStatus);:9: Redundant 'final' modifier. 
[RedundantModifier]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:563:
/**: First sentence should end with a period. [JavadocStyle]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:633:
final List keysToDelete,:9: Redundant 
'final' modifier. [RedundantModifier]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:634:
final boolean deleteFakeDir,:9: Redundant 'final' modifier. 
[RedundantModifier]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:635:
final List undeletedObjectsOnFailure):9: Redundant 'final' 
modifier. [RedundantModifier]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/StoreContext.java:119:
  public StoreContext(:10: More than 7 parameters (found 17). [ParameterNumber]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3Guard.java:50:import
 org.apache.hadoop.fs.s3a.Tristate;:8: Unused import - 
org.apache.hadoop.fs.s3a.Tristate. [UnusedImports]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3Guard.java:525:
   * {@link MetadataStore#addAncestors(Path, ITtlTimeProvider, 
BulkOperationState)}.: Line is longer than 80 characters (found 84). 
[LineLength]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardTool.java:1163:
clearBucketOption(unguardedConf, fsURI.getHost(), 
S3_METADATA_STORE_IMPL);: Line is longer than 80 characters 

[GitHub] [hadoop] steveloughran closed pull request #879: HADOOP-15563 S3Guard to create on-demand DDB tables

2019-06-18 Thread GitBox
steveloughran closed pull request #879: HADOOP-15563 S3Guard to create 
on-demand DDB tables
URL: https://github.com/apache/hadoop/pull/879
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15893) fs.TrashPolicyDefault: can't create trash directory and race condition

2019-06-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866454#comment-16866454
 ] 

Hadoop QA commented on HADOOP-15893:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
32s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 14s{color} | {color:orange} root: The patch generated 4 new + 9 unchanged - 
0 fixed = 13 total (was 9) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 38s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
12s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}103m 14s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
56s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}221m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized |
|   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HADOOP-15893 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948449/HADOOP-15893.002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 1d77f374dbcf 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 54cdde3 |
| maven | version: Apache Maven 

[GitHub] [hadoop] steveloughran commented on issue #879: HADOOP-15563 S3Guard to create on-demand DDB tables

2019-06-18 Thread GitBox
steveloughran commented on issue #879: HADOOP-15563 S3Guard to create on-demand 
DDB tables
URL: https://github.com/apache/hadoop/pull/879#issuecomment-503038386
 
 
   Last little refactoring caused whitespace issues. Will fix
   ```
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/InternalConstants.java:24:public
 class InternalConstants {:1: Utility classes should not have a public or 
default constructor. [HideUtilityClassConstructor]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:153:
  public RenameOperation(:10: More than 7 parameters (found 8). 
[ParameterNumber]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:399:
  final S3ALocatedFileStatus sourceStatus,:34: 'sourceStatus' hides a 
field. [HiddenField]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:438:
  private Path copySourceAndUpdateTracker(:16: More than 7 parameters (found 
8). [ParameterNumber]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:439:
  final RenameTracker renameTracker,:27: 'renameTracker' hides a field. 
[HiddenField]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:440:
  final Path sourcePath,:18: 'sourcePath' hides a field. [HiddenField]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:444:
  final Path destPath,:18: 'destPath' hides a field. [HiddenField]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:445:
  final String destKey,:20: 'destKey' hides a field. [HiddenField]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:487:
  final RenameTracker renameTracker,:27: 'renameTracker' hides a field. 
[HiddenField]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:488:
  final List keysToDelete,:51: 
'keysToDelete' hides a field. [HiddenField]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:489:
  final List pathsToDelete):24: 'pathsToDelete' hides a field. 
[HiddenField]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:541:
final Path path,:9: Redundant 'final' modifier. [RedundantModifier]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:542:
final String eTag,:9: Redundant 'final' modifier. [RedundantModifier]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:543:
final String versionId,:9: Redundant 'final' modifier. 
[RedundantModifier]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:544:
final long len);:9: Redundant 'final' modifier. [RedundantModifier]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:552:
final S3AFileStatus fileStatus);:9: Redundant 'final' modifier. 
[RedundantModifier]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:561:
final FileStatus fileStatus);:9: Redundant 'final' modifier. 
[RedundantModifier]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:563:
/**: First sentence should end with a period. [JavadocStyle]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:633:
final List keysToDelete,:9: Redundant 
'final' modifier. [RedundantModifier]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:634:
final boolean deleteFakeDir,:9: Redundant 'final' modifier. 
[RedundantModifier]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RenameOperation.java:635:
final List undeletedObjectsOnFailure):9: Redundant 'final' 
modifier. [RedundantModifier]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/StoreContext.java:119:
  public StoreContext(:10: More than 7 parameters (found 17). [ParameterNumber]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3Guard.java:50:import
 org.apache.hadoop.fs.s3a.Tristate;:8: Unused import - 
org.apache.hadoop.fs.s3a.Tristate. [UnusedImports]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3Guard.java:525:
   * {@link MetadataStore#addAncestors(Path, ITtlTimeProvider, 
BulkOperationState)}.: Line is longer than 80 characters (found 84). 
[LineLength]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardTool.java:1163:
clearBucketOption(unguardedConf, fsURI.getHost(), 
S3_METADATA_STORE_IMPL);: Line is longer than 80 characters (found 82). 

[jira] [Created] (HADOOP-16380) ITestS3AContractRootDir failing on trunk

2019-06-18 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-16380:
---

 Summary: ITestS3AContractRootDir failing on trunk
 Key: HADOOP-16380
 URL: https://issues.apache.org/jira/browse/HADOOP-16380
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3, test
Affects Versions: 3.3.0
Reporter: Steve Loughran
Assignee: Steve Loughran


I'm seeing reproducible failures of {{ITestS3AContractRootDir}} which look like 
consistency problems *even when S3Guard is enabled*. 

Suspicion: root dir listings are still inconsistent, due to the way we don't 
keep root entries in the table



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16184) S3Guard: Handle OOB deletions and creation of a file which has a tombstone marker

2019-06-18 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866432#comment-16866432
 ] 

Steve Loughran commented on HADOOP-16184:
-

Gabor: is this fixed in HADOOP-16279? If so we can close it

> S3Guard: Handle OOB deletions and creation of a file which has a tombstone 
> marker
> -
>
> Key: HADOOP-16184
> URL: https://issues.apache.org/jira/browse/HADOOP-16184
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> When a file is deleted in S3 using S3Guard a tombstone marker will be added 
> for that file in the MetadataStore. If another process creates the file 
> without using S3Guard (as an out of band operation - OOB) the file will still 
> not be visible to the client using S3Guard because of the deletion tombstone.
> 
> The whole of S3Guard is potentially brittle to
>  * OOB deletions: we skip it in HADOOP-15999, so no worse, but because the 
> S3AInputStream retries on FNFE, so as to "debounce" cached 404s, it's 
> potentially going to retry forever.
>  * OOB creation of a file which has a deletion tombstone marker.
> The things this issue covers:
>  * Write a test to simulate that deletion problem, to see what happens. We 
> ought to have the S3AInputStream retry briefly on that initial GET failing, 
> but only on that initial one. (after setting "fs.s3a.retry.limit" to 
> something low & the interval down to 10ms or so to fail fast)
>  * Sequences
> {noformat}
> 1. create; delete; open; read -> fail after retry
> 2. create; open, read, delete, read -> fail fast on the second read
> {noformat}
> The StoreStatistics of the filesystem's IGNORED_ERRORS stat will be increased 
> on the ignored error, so on sequence 1 it will have increased, whereas on 
> sequence 2 it will not have. If either of these tests doesn't quite fail as 
> expected, we can disable the tests and continue, at least now with some tests 
> to simulate a condition we don't have a fix for.
>  * For both, we just need to have some model of how long it takes for 
> debouncing to stabilize. Then in this new check, if an FNFE is raised and the 
> check is happening > (modtime + debounce-delay) then it's a real FNFE.
> This issue is created based on [~ste...@apache.org] remarks and comments on 
> HADOOP-15999.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-14936) S3Guard: remove "experimental" from documentation

2019-06-18 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-14936:
---

Assignee: Steve Loughran

> S3Guard: remove "experimental" from documentation
> -
>
> Key: HADOOP-14936
> URL: https://issues.apache.org/jira/browse/HADOOP-14936
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Steve Loughran
>Priority: Major
>
> I think it is time to remove the "experimental feature" designation in the 
> site docs for S3Guard.  Discuss.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] nandakumar131 commented on a change in pull request #852: HDDS-1454. GC other system pause events can trigger pipeline destroy for all the nodes in the cluster. Contributed by Suprat

2019-06-18 Thread GitBox
nandakumar131 commented on a change in pull request #852: HDDS-1454. GC other 
system pause events can trigger pipeline destroy for all the nodes in the 
cluster. Contributed by Supratim Deka
URL: https://github.com/apache/hadoop/pull/852#discussion_r294696511
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeStateManager.java
 ##
 @@ -558,41 +619,40 @@ public void run() {
   heartbeatCheckerIntervalMs);
 }
 
-// we purposefully make this non-deterministic. Instead of using a
-// scheduleAtFixedFrequency  we will just go to sleep
-// and wake up at the next rendezvous point, which is currentTime +
-// heartbeatCheckerIntervalMs. This leads to the issue that we are now
-// heart beating not at a fixed cadence, but clock tick + time taken to
-// work.
-//
-// This time taken to work can skew the heartbeat processor thread.
-// The reason why we don't care is because of the following reasons.
-//
-// 1. checkerInterval is generally many magnitudes faster than datanode HB
-// frequency.
-//
-// 2. if we have too many nodes, the SCM would be doing only HB
-// processing; this could lead to SCM's CPU starvation. With this
-// approach we always guarantee that the HB thread sleeps for a little while.
-//
-// 3. It is possible that we will never finish processing the HB's in the
-// thread. But that means we have a mis-configured system. We will warn
-// the users by logging that information.
-//
-// 4. And the most important reason, heartbeats are not blocked even if
-// this thread does not run, they will go into the processing queue.
+  }
+
+  private void scheduleNextHealthCheck() {
 
 if (!Thread.currentThread().isInterrupted() &&
 !executorService.isShutdown()) {
   //BUGBUG: The return future needs to checked here to make sure the
   // exceptions are handled correctly.
-  executorService.schedule(this, heartbeatCheckerIntervalMs,
-  TimeUnit.MILLISECONDS);
+  healthCheckFuture = executorService.schedule(this,
+  heartbeatCheckerIntervalMs, TimeUnit.MILLISECONDS);
 } else {
-  LOG.info("Current Thread is interrupted, shutting down HB processing " +
+  LOG.warn("Current Thread is interrupted, shutting down HB processing " +
   "thread for Node Manager.");
 }
 
+lastHealthCheck = Time.monotonicNow();
+  }
+
+  /**
+   * if the time since last check exceeds the stale|dead node interval, skip.
+   * such long delays might be caused by a JVM pause. SCM cannot make reliable
+   * conclusions about datanode health in such situations.
+   * @return : true indicates skip HB checks
+   */
+  private boolean shouldSkipCheck() {
+
+long currentTime = Time.monotonicNow();
+long minInterval = Math.min(staleNodeIntervalMs, deadNodeIntervalMs);
+
+if ((currentTime - lastHealthCheck) >= minInterval) {
+  return true;
+}
+
+return false;
 
 Review comment:
   The return statement can be simplified
   `return (currentTime - lastHealthCheck) >= minInterval;`
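   
   Applied, the method body collapses to a sketch like:
   
   ```java
   private boolean shouldSkipCheck() {
     // A gap larger than the smaller of the stale/dead intervals means the
     // scheduler was delayed (e.g. a GC pause), so skip this health check.
     long currentTime = Time.monotonicNow();
     long minInterval = Math.min(staleNodeIntervalMs, deadNodeIntervalMs);
     return (currentTime - lastHealthCheck) >= minInterval;
   }
   ```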


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] nandakumar131 commented on a change in pull request #852: HDDS-1454. GC other system pause events can trigger pipeline destroy for all the nodes in the cluster. Contributed by Suprat

2019-06-18 Thread GitBox
nandakumar131 commented on a change in pull request #852: HDDS-1454. GC other 
system pause events can trigger pipeline destroy for all the nodes in the 
cluster. Contributed by Supratim Deka
URL: https://github.com/apache/hadoop/pull/852#discussion_r294695045
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeStateManager.java
 ##
 @@ -464,6 +487,44 @@ public void setContainers(UUID uuid, Set 
containerIds)
   @Override
   public void run() {
 
+if (shouldSkipCheck()) {
+  skippedHealthChecks++;
+  LOG.info("Detected long delay in scheduling HB processing thread. "
+  + "Skipping heartbeat checks for one iteration.");
+} else {
+  checkNodesHealth();
+}
+
+// we purposefully make this non-deterministic. Instead of using a
+// scheduleAtFixedFrequency  we will just go to sleep
+// and wake up at the next rendezvous point, which is currentTime +
+// heartbeatCheckerIntervalMs. This leads to the issue that we are now
+// heart beating not at a fixed cadence, but clock tick + time taken to
+// work.
+//
+// This time taken to work can skew the heartbeat processor thread.
+// The reason why we don't care is because of the following reasons.
+//
+// 1. checkerInterval is generally many magnitudes faster than datanode HB
+// frequency.
+//
+// 2. if we have too many nodes, the SCM would be doing only HB
+// processing; this could lead to SCM's CPU starvation. With this
+// approach we always guarantee that the HB thread sleeps for a little while.
+//
+// 3. It is possible that we will never finish processing the HB's in the
+// thread. But that means we have a mis-configured system. We will warn
+// the users by logging that information.
+//
+// 4. And the most important reason, heartbeats are not blocked even if
+// this thread does not run, they will go into the processing queue.
+scheduleNextHealthCheck();
+
+return;
 
 Review comment:
   We don't need this return statement.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] nandakumar131 commented on a change in pull request #852: HDDS-1454. GC other system pause events can trigger pipeline destroy for all the nodes in the cluster. Contributed by Suprat

2019-06-18 Thread GitBox
nandakumar131 commented on a change in pull request #852: HDDS-1454. GC other 
system pause events can trigger pipeline destroy for all the nodes in the 
cluster. Contributed by Supratim Deka
URL: https://github.com/apache/hadoop/pull/852#discussion_r294698496
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestSCMNodeManager.java
 ##
 @@ -339,6 +341,96 @@ public void testScmDetectStaleAndDeadNode()
 }
   }
 
+  /**
+   * Simulate a JVM Pause by pausing the health check process
+   * Ensure that none of the nodes with heartbeats become Dead or Stale.
+   * @throws IOException
+   * @throws InterruptedException
+   * @throws AuthenticationException
+   */
+  @Test
+  public void testScmHandleJvmPause()
+  throws IOException, InterruptedException, AuthenticationException {
+final int healthCheckInterval = 200; // milliseconds
+final int heartbeatInterval = 1; // seconds
+final int staleNodeInterval = 3; // seconds
+final int deadNodeInterval = 6; // seconds
+ScheduledFuture schedFuture;
 
 Review comment:
   NIT: typo `schedFuture`


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ehiggs commented on a change in pull request #743: HADOOP-11452 make rename/3 public

2019-06-18 Thread GitBox
ehiggs commented on a change in pull request #743: HADOOP-11452 make rename/3 
public
URL: https://github.com/apache/hadoop/pull/743#discussion_r294688267
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSExceptionMessages.java
 ##
 @@ -51,4 +51,86 @@
 
   public static final String PERMISSION_DENIED_BY_STICKY_BIT =
   "Permission denied by sticky bit";
+
+  /**
+   * Renaming a destination under source is forbidden.
+   * This is a format string.
+   */
+  public static final String RENAME_DEST_UNDER_SOURCE =
+  "Rename destination %s is a directory or file under source %s";
+
+  /**
+   * Renaming a destination to source is forbidden.
+   * This is a format string.
+   */
+  public static final String RENAME_DEST_EQUALS_SOURCE =
+  "The source %s and destination %s are the same";
+
+  /**
+   * Renaming to root is forbidden.
+   */
+  public static final String RENAME_DEST_IS_ROOT =
+  "Rename destination cannot be the root";
+
+  /**
+   * The parent of a rename destination is not found.
+   * This is a format string.
+   */
+  public static final String RENAME_DEST_NO_PARENT_OF =
+  "Rename destination parent of %s not found";
+
+  /**
+   * The parent of a rename destination is not found.
+   * This is a format string, taking the parent path of the destination.
+   */
+  public static final String RENAME_DEST_NO_PARENT =
+  "Rename destination parent %s not found";
+
+  /**
+   * The parent of a rename destination is not a directory.
+   * This is a format string.
+   */
+  public static final String RENAME_DEST_PARENT_NOT_DIRECTORY =
+  "Rename destination parent %s is a file";
+
+  /**
+   * The rename destination is not an empty directory.
+   * This is a format string.
+   */
+  public static final String RENAME_DEST_NOT_EMPTY =
+  "Rename destination directory is not empty: %s";
+
+  /**
+  "The rename destination already exists.
+   * This is a format string.
+   */
+  public static final String RENAME_DEST_EXISTS =
+  "Rename destination %s already exists";
+
+  /**
+   * The rename source doesn't exist.
+   * This is a format string.
+   */
+  public static final String RENAME_SOURCE_NOT_FOUND =
+  "Rename source %s is not found";
+
+  /**
+   * The rename source and dest are off different types
 
 Review comment:
   ```suggestion
  * The rename source and dest are of different types
   ```
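   
   Since each constant is a format string, callers are expected to expand it 
with String.format; a hedged usage sketch (the helper and the exception choice 
are illustrative, not from this patch):
   
   ```java
   // Illustrative only: build a rename failure from one of the format strings.
   static PathIOException destUnderSource(Path src, Path dst) {
     return new PathIOException(src.toString(),
         String.format(RENAME_DEST_UNDER_SOURCE, dst, src));
   }
   ```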


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


