[jira] [Updated] (HADOOP-16430) S3AFilesystem.delete to incrementally update s3guard with deletions

2019-09-09 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16430:
---
Fix Version/s: 3.3.0

Since this is committed to trunk, set fix version to 3.3.0.

Hi [~ste...@apache.org], this broke mvn javadoc:javadoc in the hadoop-aws module. 
Would you check HADOOP-16554?

> S3AFilesystem.delete to incrementally update s3guard with deletions
> ---
>
> Key: HADOOP-16430
> URL: https://issues.apache.org/jira/browse/HADOOP-16430
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0, 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: Screenshot 2019-07-16 at 22.08.31.png
>
>
> Currently S3AFilesystem.delete() only updates S3Guard with the deletions at 
> the end of a paged delete operation. This makes it slow when there are many 
> thousands of files to delete, and increases the window of vulnerability to 
> failures.
> Preferred approach:
> * after every bulk DELETE call is issued to S3, queue the (async) delete of 
> all entries in that post.
> * at the end of the delete, await the completion of these operations.
> * inside S3AFS, also do the delete across threads, so that different HTTPS 
> connections can be used.
> This should maximise DDB throughput against tables which aren't IO limited.
> When executed against small IOP limited tables, the parallel DDB DELETE 
> batches will trigger a lot of throttling events; we should make sure these 
> aren't going to trigger failures.
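
A minimal sketch of the flow proposed above, using hypothetical helpers deleteBatchFromS3() and removePathsFromMetadataStore() in place of the real S3AFileSystem/DynamoDBMetadataStore calls:

{noformat}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/** Sketch only: incremental, asynchronous s3guard updates during a paged delete. */
public class IncrementalDeleteSketch {
  // Separate worker threads so the metadata-store updates can use their
  // own HTTPS connections.
  private final ExecutorService pool = Executors.newFixedThreadPool(4);

  public void delete(List<List<String>> pages) {
    List<CompletableFuture<Void>> pending = new ArrayList<>();
    for (List<String> page : pages) {
      deleteBatchFromS3(page);                  // bulk DELETE issued to S3
      pending.add(CompletableFuture.runAsync(   // queue the async s3guard update
          () -> removePathsFromMetadataStore(page), pool));
    }
    // At the end of the delete, await completion of all queued updates.
    CompletableFuture.allOf(pending.toArray(new CompletableFuture[0])).join();
  }

  private void deleteBatchFromS3(List<String> keys) { /* hypothetical */ }

  private void removePathsFromMetadataStore(List<String> keys) { /* hypothetical */ }
}
{noformat}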



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16430) S3AFilesystem.delete to incrementally update s3guard with deletions

2019-09-09 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-16430.

Resolution: Fixed

> S3AFilesystem.delete to incrementally update s3guard with deletions
> ---
>
> Key: HADOOP-16430
> URL: https://issues.apache.org/jira/browse/HADOOP-16430
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0, 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: Screenshot 2019-07-16 at 22.08.31.png
>
>
> Currently S3AFilesystem.delete() only updates S3Guard with the deletions at 
> the end of a paged delete operation. This makes it slow when there are many 
> thousands of files to delete, and increases the window of vulnerability to 
> failures.
> Preferred approach:
> * after every bulk DELETE call is issued to S3, queue the (async) delete of 
> all entries in that post.
> * at the end of the delete, await the completion of these operations.
> * inside S3AFS, also do the delete across threads, so that different HTTPS 
> connections can be used.
> This should maximise DDB throughput against tables which aren't IO limited.
> When executed against small IOP limited tables, the parallel DDB DELETE 
> batches will trigger a lot of throttling events; we should make sure these 
> aren't going to trigger failures.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16554) mvn javadoc:javadoc fails in hadoop-aws

2019-09-09 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-16554:
--

 Summary: mvn javadoc:javadoc fails in hadoop-aws
 Key: HADOOP-16554
 URL: https://issues.apache.org/jira/browse/HADOOP-16554
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: Akira Ajisaka


mvn javadoc:javadoc fails in the hadoop-aws module.
{noformat}
[ERROR] 
/(snip)/hadoop-mirror/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/CommitOperations.java:619:
 error: reference not found
[ERROR]  * See {@link CommitOperations#revertCommit(SinglePendingCommit)}.
[ERROR]   ^
{noformat}
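
For context: javadoc reports "reference not found" when a {@link} target cannot be resolved from that file's imports. A hypothetical illustration of the usual remedy (the package path of SinglePendingCommit is an assumption), not the actual HADOOP-16554 patch:

{noformat}
// Hypothetical illustration only; not the actual HADOOP-16554 fix.
import org.apache.hadoop.fs.s3a.commit.CommitOperations;
import org.apache.hadoop.fs.s3a.commit.files.SinglePendingCommit;   // assumed package path

final class JavadocLinkExample {
  /**
   * Resolvable once both types are imported here; alternatively the
   * parameter list can be dropped: {@link CommitOperations#revertCommit}.
   */
  void example(CommitOperations ops, SinglePendingCommit commit) {
    // no-op: only the javadoc reference matters for this example
  }
}
{noformat}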



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] timmylicheng opened a new pull request #1418: HDDS-2089: Add createPipeline CLI.

2019-09-09 Thread GitBox
timmylicheng opened a new pull request #1418: HDDS-2089: Add createPipeline CLI.
URL: https://github.com/apache/hadoop/pull/1418
 
 
   #HDDS-2089 Add createPipeline for ozone scmcli
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16549) Remove Unsupported SSL/TLS Versions from Docs/Properties

2019-09-09 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16926282#comment-16926282
 ] 

Hudson commented on HADOOP-16549:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17266 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17266/])
HADOOP-16549. Remove Unsupported SSL/TLS Versions from Docs/Properties. 
(weichiu: rev bc2d3a71d6e09310d1e49e4e31433304c76e6701)
* (edit) hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/markdown/EncryptedShuffle.md
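
A minimal sketch of how the resulting setting can be inspected, assuming the property key hadoop.ssl.enabled.protocols and a TLSv1.2-only fallback; not an excerpt of the committed patch:

{noformat}
import org.apache.hadoop.conf.Configuration;

/** Sketch only; assumes the property key "hadoop.ssl.enabled.protocols". */
public class SslProtocolCheck {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // After this change the advertised default should no longer include
    // TLSv1, TLSv1.1, SSLv3 or SSLv2Hello.
    String protocols = conf.get("hadoop.ssl.enabled.protocols", "TLSv1.2");
    System.out.println("Enabled SSL/TLS protocols: " + protocols);
  }
}
{noformat}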


> Remove Unsupported SSL/TLS Versions from Docs/Properties
> 
>
> Key: HADOOP-16549
> URL: https://issues.apache.org/jira/browse/HADOOP-16549
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, security
>Affects Versions: 3.3.0
>Reporter: Daisuke Kobayashi
>Assignee: Daisuke Kobayashi
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HADOOP-16549.001.patch
>
>
> We should remove the following unsupported versions from docs and 
> core-default.xml appropriately.
> TLS v1.0
> TLS v1.1
> SSL v3
> SSLv2Hello
> ref: 
> https://www.eclipse.org/jetty/documentation/9.3.27.v20190418/configuring-ssl.html
> https://github.com/eclipse/jetty.project/issues/866
> [~aajisaka], I happened to find you left TLSv1.1 in 
> https://issues.apache.org/jira/browse/HADOOP-16000. Should we still keep it?



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16549) Remove Unsupported SSL/TLS Versions from Docs/Properties

2019-09-09 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-16549:
-
Fix Version/s: 3.3.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Thanks [~daisuke.kobayashi] and thanks [~aajisaka]!

> Remove Unsupported SSL/TLS Versions from Docs/Properties
> 
>
> Key: HADOOP-16549
> URL: https://issues.apache.org/jira/browse/HADOOP-16549
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, security
>Affects Versions: 3.3.0
>Reporter: Daisuke Kobayashi
>Assignee: Daisuke Kobayashi
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HADOOP-16549.001.patch
>
>
> We should remove the following unsupported versions from docs and 
> core-default.xml appropriately.
> TLS v1.0
> TLS v1.1
> SSL v3
> SSLv2Hello
> ref: 
> https://www.eclipse.org/jetty/documentation/9.3.27.v20190418/configuring-ssl.html
> https://github.com/eclipse/jetty.project/issues/866
> [~aajisaka], I happened to find you left TLSv1.1 in 
> https://issues.apache.org/jira/browse/HADOOP-16000. Should we still keep it?



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16549) Remove Unsupported SSL/TLS Versions from Docs/Properties

2019-09-09 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16926266#comment-16926266
 ] 

Wei-Chiu Chuang commented on HADOOP-16549:
--

+1, will commit this shortly. Set target version to 3.3.0.
For HADOOP-16152, it's a pretty tricky change actually. I'm hoping to get it 
into 3.3.0 but not sure.

> Remove Unsupported SSL/TLS Versions from Docs/Properties
> 
>
> Key: HADOOP-16549
> URL: https://issues.apache.org/jira/browse/HADOOP-16549
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, security
>Affects Versions: 3.3.0
>Reporter: Daisuke Kobayashi
>Assignee: Daisuke Kobayashi
>Priority: Minor
> Attachments: HADOOP-16549.001.patch
>
>
> We should remove the following unsupported versions from docs and 
> core-default.xml appropriately.
> TLS v1.0
> TLS v1.1
> SSL v3
> SSLv2Hello
> ref: 
> https://www.eclipse.org/jetty/documentation/9.3.27.v20190418/configuring-ssl.html
> https://github.com/eclipse/jetty.project/issues/866
> [~aajisaka], I happened to find you left TLSv1.1 in 
> https://issues.apache.org/jira/browse/HADOOP-16000. Should we still keep it?



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16549) Remove Unsupported SSL/TLS Versions from Docs/Properties

2019-09-09 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-16549:
-
Affects Version/s: 3.3.0

> Remove Unsupported SSL/TLS Versions from Docs/Properties
> 
>
> Key: HADOOP-16549
> URL: https://issues.apache.org/jira/browse/HADOOP-16549
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, security
>Affects Versions: 3.3.0
>Reporter: Daisuke Kobayashi
>Assignee: Daisuke Kobayashi
>Priority: Minor
> Attachments: HADOOP-16549.001.patch
>
>
> We should remove the following unsupported versions from docs and 
> core-default.xml appropriately.
> TLS v1.0
> TLS v1.1
> SSL v3
> SSLv2Hello
> ref: 
> https://www.eclipse.org/jetty/documentation/9.3.27.v20190418/configuring-ssl.html
> https://github.com/eclipse/jetty.project/issues/866
> [~aajisaka], I happened to find you left TLSv1.1 in 
> https://issues.apache.org/jira/browse/HADOOP-16000. Should we still keep it?



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] tasanuma commented on issue #1414: HDFS-14835. RBF: Secured Router should not run when it can't initialize DelegationTokenSecretManager

2019-09-09 Thread GitBox
tasanuma commented on issue #1414: HDFS-14835. RBF: Secured Router should not 
run when it can't initialize DelegationTokenSecretManager
URL: https://github.com/apache/hadoop/pull/1414#issuecomment-529730451
 
 
   @goiri @chittshota Thanks for your reviews.
   
   In this PR, the error occurs only when the authentication method is 
Kerberos. Given that YARN uses delegation tokens to communicate with HDFS when 
the authentication method is Kerberos, I think a Kerberized Router should fail 
to start if the secret manager is broken.
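
   A minimal sketch of that fail-fast behaviour, using a hypothetical SecretManager interface in place of the real delegation token secret manager; not the actual Router code:

   ```java
   import java.io.IOException;

   /** Sketch only; SecretManager here is a hypothetical stand-in. */
   public class SecureStartupSketch {

     interface SecretManager {
       void startThreads() throws IOException;
     }

     private final boolean securityEnabled;
     private final SecretManager secretManager;

     public SecureStartupSketch(boolean securityEnabled, SecretManager secretManager) {
       this.securityEnabled = securityEnabled;
       this.secretManager = secretManager;
     }

     public void start() throws IOException {
       if (securityEnabled) {
         // With Kerberos on, refuse to start with a broken secret manager
         // rather than run without working delegation tokens.
         secretManager.startThreads();
       }
     }
   }
   ```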


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] lqjack commented on issue #1037: HADOOP-15847 limit the r/w capacity

2019-09-09 Thread GitBox
lqjack commented on issue #1037: HADOOP-15847 limit the r/w capacity 
URL: https://github.com/apache/hadoop/pull/1037#issuecomment-529726830
 
 
   @steveloughran email address: lqjack...@gmail.com, thanks a lot.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1163: HDDS-1786 : Datanodes takeSnapshot should delete previously created s…

2019-09-09 Thread GitBox
hadoop-yetus commented on issue #1163: HDDS-1786 : Datanodes takeSnapshot 
should delete previously created s…
URL: https://github.com/apache/hadoop/pull/1163#issuecomment-529723367
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 82 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 637 | trunk passed |
   | +1 | compile | 372 | trunk passed |
   | +1 | checkstyle | 76 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 954 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 170 | trunk passed |
   | 0 | spotbugs | 436 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 638 | trunk passed |
   | -0 | patch | 479 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 547 | the patch passed |
   | +1 | compile | 377 | the patch passed |
   | +1 | javac | 377 | the patch passed |
   | +1 | checkstyle | 79 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 756 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 191 | the patch passed |
   | +1 | findbugs | 657 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 314 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2269 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 46 | The patch does not generate ASF License warnings. |
   | | | 8333 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestBlockOutputStream |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.container.TestContainerReplication |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | hadoop.ozone.TestSecureOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1163/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1163 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 06da123aa662 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 650c4ce |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1163/7/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1163/7/testReport/ |
   | Max. process+thread count | 4692 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service U: 
hadoop-hdds/container-service |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1163/7/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1163: HDDS-1786 : Datanodes takeSnapshot should delete previously created s…

2019-09-09 Thread GitBox
hadoop-yetus commented on issue #1163: HDDS-1786 : Datanodes takeSnapshot 
should delete previously created s…
URL: https://github.com/apache/hadoop/pull/1163#issuecomment-529721393
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 46 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 623 | trunk passed |
   | +1 | compile | 392 | trunk passed |
   | +1 | checkstyle | 80 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 854 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 174 | trunk passed |
   | 0 | spotbugs | 443 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 649 | trunk passed |
   | -0 | patch | 491 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 552 | the patch passed |
   | +1 | compile | 396 | the patch passed |
   | +1 | javac | 396 | the patch passed |
   | +1 | checkstyle | 85 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 654 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 175 | the patch passed |
   | +1 | findbugs | 653 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 295 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1996 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 52 | The patch does not generate ASF License warnings. |
   | | | 7864 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.TestSecureOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.scm.TestContainerSmallFile |
   |   | hadoop.ozone.container.TestContainerReplication |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1163/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1163 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux d74699a82149 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 650c4ce |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1163/6/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1163/6/testReport/ |
   | Max. process+thread count | 5268 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service U: 
hadoop-hdds/container-service |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1163/6/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16530) Update xercesImpl in branch-2

2019-09-09 Thread Masatake Iwasaki (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16926209#comment-16926209
 ] 

Masatake Iwasaki commented on HADOOP-16530:
---

[~jhung] yes. I'm going to commit this today.

> Update xercesImpl in branch-2
> -
>
> Key: HADOOP-16530
> URL: https://issues.apache.org/jira/browse/HADOOP-16530
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: release-blocker
> Attachments: HADOOP-16530.branch-2.001.patch
>
>
> Hadoop 2 depends on xercesImpl 2.9.1, which is more than 10 years old. The 
> latest version is 2.12.0, released last year. Let's update this dependency.
> HDFS-12221 removed xercesImpl in Hadoop 3. Looking at HDFS-12221, the impact 
> of this dependency is very minimal: only used by offlineimageviewer. 
> TestOfflineEditsViewer passed for me after the update. Not sure about the 
> impact of downstream applications though.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16549) Remove Unsupported SSL/TLS Versions from Docs/Properties

2019-09-09 Thread Daisuke Kobayashi (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16926203#comment-16926203
 ] 

Daisuke Kobayashi commented on HADOOP-16549:


That makes sense. So, to which version should we set the fix version of this 
jira? [~jojochuang]

> Remove Unsupported SSL/TLS Versions from Docs/Properties
> 
>
> Key: HADOOP-16549
> URL: https://issues.apache.org/jira/browse/HADOOP-16549
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, security
>Reporter: Daisuke Kobayashi
>Assignee: Daisuke Kobayashi
>Priority: Minor
> Attachments: HADOOP-16549.001.patch
>
>
> We should remove the following unsupported versions from docs and 
> core-default.xml appropriately.
> TLS v1.0
> TLS v1.1
> SSL v3
> SSLv2Hello
> ref: 
> https://www.eclipse.org/jetty/documentation/9.3.27.v20190418/configuring-ssl.html
> https://github.com/eclipse/jetty.project/issues/866
> [~aajisaka], I happened to find you left TLSv1.1 in 
> https://issues.apache.org/jira/browse/HADOOP-16000. Should we still keep it?



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-11866) increase readability and reliability of checkstyle, shellcheck, and whitespace reports

2019-09-09 Thread Allen Wittenauer (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-11866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14521847#comment-14521847
 ] 

Allen Wittenauer edited comment on HADOOP-11866 at 9/10/19 12:10 AM:
-

\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | reexec |   0m  0s | dev-support patch detected. |
| {color:blue}0{color} | pre-patch |   0m  0s | Pre-patch trunk compilation is 
healthy. |
| {color:blue}0{color} | @author |   0m  0s | Skipping @author checks as 
test-patch has been patched. |
| {color:green}+1{color} | release audit |   0m 16s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:blue}0{color} | shellcheck |   0m 16s | Shellcheck was not available. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| | |   0m 35s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12729531/HADOOP-11866-07.patch |
| Optional Tests | shellcheck |
| git revision | trunk / de9404f |
| Java | 1.7.0_55 |
| uname | Linux asf902.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6228/console |


This message was automatically generated.


was (Author: hadoopqa):
\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | reexec |   0m  0s | dev-support patch detected. |
| {color:blue}0{color} | pre-patch |   0m  0s | Pre-patch trunk compilation is 
healthy. |
| {color:blue}0{color} | @author |   0m  0s | Skipping @author checks as 
test-patch has been patched. |
| {color:green}+1{color} | release audit |   0m 16s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:blue}0{color} | shellcheck |   0m 16s | Shellcheck was not available. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| | |   0m 19s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12729531/HADOOP-11866-07.patch |
| Optional Tests | shellcheck |
| git revision | trunk / de9404f |
| Java | 1.7.0_55 |
| uname | Linux asf902.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6228/console |


This message was automatically generated.

> increase readability and reliability of checkstyle, shellcheck, and 
> whitespace reports
> --
>
> Key: HADOOP-11866
> URL: https://issues.apache.org/jira/browse/HADOOP-11866
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Naganarasimha G R
>Assignee: Allen Wittenauer
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HADOOP-11866-05.patch, HADOOP-11866-06.patch, 
> HADOOP-11866-07.patch, HADOOP-11866-08.patch, HADOOP-11866-checkstyle.patch, 
> HADOOP-11866.20150422-1.patch, HADOOP-11866.20150423-1.patch, 
> HADOOP-11866.20150427-1.patch
>
>
> HADOOP-11746 supports listing the lines which have trailing white space, 
> but doesn't report the patch line number. Without this, the report output 
> will not be of much help, as in most cases it reports blank lines. Also, for 
> first-timers it would be difficult to understand the output of the checkstyle 
> script, hence this adds a header.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #568: HADOOP-15691 Add PathCapabilities to FS and FC to complement StreamCapabilities

2019-09-09 Thread GitBox
hadoop-yetus commented on issue #568: HADOOP-15691 Add PathCapabilities to FS 
and FC to complement StreamCapabilities
URL: https://github.com/apache/hadoop/pull/568#issuecomment-529708994
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 3275 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 2 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 9 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 69 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1246 | trunk passed |
   | +1 | compile | 1077 | trunk passed |
   | +1 | checkstyle | 171 | trunk passed |
   | +1 | mvnsite | 292 | trunk passed |
   | +1 | shadedclient | 1275 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 256 | trunk passed |
   | 0 | spotbugs | 51 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 616 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for patch |
   | +1 | mvninstall | 231 | the patch passed |
   | +1 | compile | 1268 | the patch passed |
   | -1 | javac | 1268 | root generated 1 new + 1468 unchanged - 0 fixed = 1469 
total (was 1468) |
   | -0 | checkstyle | 191 | root: The patch generated 16 new + 600 unchanged - 
0 fixed = 616 total (was 600) |
   | +1 | mvnsite | 331 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 828 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 248 | the patch passed |
   | +1 | findbugs | 654 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 636 | hadoop-common in the patch failed. |
   | +1 | unit | 149 | hadoop-hdfs-client in the patch passed. |
   | +1 | unit | 335 | hadoop-hdfs-httpfs in the patch passed. |
   | +1 | unit | 112 | hadoop-aws in the patch passed. |
   | +1 | unit | 86 | hadoop-azure in the patch passed. |
   | -1 | unit | 66 | hadoop-azure-datalake in the patch failed. |
   | +1 | asflicense | 56 | The patch does not generate ASF License warnings. |
   | | | 13390 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ipc.TestCallQueueManager |
   |   | hadoop.fs.adl.live.TestAdlSdkConfiguration |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-568/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/568 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 3fa4caa9c9aa 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 469165e |
   | Default Java | 1.8.0_222 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-568/7/artifact/out/diff-compile-javac-root.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-568/7/artifact/out/diff-checkstyle-root.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-568/7/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-568/7/artifact/out/patch-unit-hadoop-tools_hadoop-azure-datalake.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-568/7/testReport/ |
   | Max. process+thread count | 609 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs-httpfs 
hadoop-tools/hadoop-aws hadoop-tools/hadoop-azure 
hadoop-tools/hadoop-azure-datalake U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-568/7/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao merged pull request #1373: HDDS-2053. Fix TestOzoneManagerRatisServer failure. Contributed by Xi…

2019-09-09 Thread GitBox
xiaoyuyao merged pull request #1373: HDDS-2053. Fix TestOzoneManagerRatisServer 
failure. Contributed by Xi…
URL: https://github.com/apache/hadoop/pull/1373
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] shwetayakkali opened a new pull request #1417: Hdds 2044 new

2019-09-09 Thread GitBox
shwetayakkali opened a new pull request #1417: Hdds 2044 new
URL: https://github.com/apache/hadoop/pull/1417
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hanishakoneru commented on issue #1373: HDDS-2053. Fix TestOzoneManagerRatisServer failure. Contributed by Xi…

2019-09-09 Thread GitBox
hanishakoneru commented on issue #1373: HDDS-2053. Fix 
TestOzoneManagerRatisServer failure. Contributed by Xi…
URL: https://github.com/apache/hadoop/pull/1373#issuecomment-529688386
 
 
   Change LGTM. +1.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bgaborg commented on a change in pull request #1208: HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log)

2019-09-09 Thread GitBox
bgaborg commented on a change in pull request #1208: HADOOP-16423. S3Guard 
fsck: Check metadata consistency between S3 and metadatastore (log)
URL: https://github.com/apache/hadoop/pull/1208#discussion_r322474605
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardFsckViolationHandler.java
 ##
 @@ -0,0 +1,315 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.s3guard;
+
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.s3a.S3AFileStatus;
+import org.apache.hadoop.fs.s3a.S3AFileSystem;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.lang.reflect.InvocationTargetException;
+import java.util.Arrays;
+import java.util.List;
+
+/**
+ * Violation handler for the S3Guard's fsck.
+ */
+public class S3GuardFsckViolationHandler {
+  private static final Logger LOG = LoggerFactory.getLogger(
+  S3GuardFsckViolationHandler.class);
+
+  // The rawFS and metadataStore are here to prepare when the ViolationHandlers
+  // will not just log, but fix the violations, so they will have access.
+  private S3AFileSystem rawFs;
+  private DynamoDBMetadataStore metadataStore;
+
+  private static String newLine = System.getProperty("line.separator");
+
+  public S3GuardFsckViolationHandler(S3AFileSystem fs,
+  DynamoDBMetadataStore ddbms) {
+
+this.metadataStore = ddbms;
+this.rawFs = fs;
+  }
+
+  public void handle(S3GuardFsck.ComparePair comparePair) {
+if (!comparePair.containsViolation()) {
+  LOG.debug("There is no violation in the compare pair: " + toString());
+  return;
+}
+
+StringBuilder sB = new StringBuilder();
+sB.append(newLine)
+.append("On path: ").append(comparePair.getPath()).append(newLine);
+
+// Create a new instance of the handler and use it.
+for (S3GuardFsck.Violation violation : comparePair.getViolations()) {
+  try {
+ViolationHandler handler = violation.getHandler()
+.getDeclaredConstructor(S3GuardFsck.ComparePair.class)
+.newInstance(comparePair);
+final String errorStr = handler.getError();
+sB.append(errorStr);
+  } catch (NoSuchMethodException e) {
+LOG.error("Can not find declared constructor for handler: {}",
+violation.getHandler());
+  } catch (IllegalAccessException | InstantiationException | 
InvocationTargetException e) {
+LOG.error("Can not instantiate handler: {}",
+violation.getHandler());
+  }
+  sB.append(newLine);
+}
+LOG.error(sB.toString());
+  }
+
+  /**
+   * Violation handler abstract class.
+   * This class should be extended for violation handlers.
+   */
+  public static abstract class ViolationHandler {
+private final PathMetadata pathMetadata;
+private final S3AFileStatus s3FileStatus;
+private final S3AFileStatus msFileStatus;
+private final List<FileStatus> s3DirListing;
+private final DirListingMetadata msDirListing;
+
+public ViolationHandler(S3GuardFsck.ComparePair comparePair) {
+  pathMetadata = comparePair.getMsPathMetadata();
+  s3FileStatus = comparePair.getS3FileStatus();
+  if (pathMetadata != null) {
+msFileStatus = pathMetadata.getFileStatus();
+  } else {
+msFileStatus = null;
+  }
+  s3DirListing = comparePair.getS3DirListing();
+  msDirListing = comparePair.getMsDirListing();
+}
+
+abstract String getError();
+
+public PathMetadata getPathMetadata() {
+  return pathMetadata;
+}
+
+public S3AFileStatus getS3FileStatus() {
+  return s3FileStatus;
+}
+
+public S3AFileStatus getMsFileStatus() {
+  return msFileStatus;
+}
+
+public List<FileStatus> getS3DirListing() {
+  return s3DirListing;
+}
+
+public DirListingMetadata getMsDirListing() {
+  return msDirListing;
+}
+  }
+
+  /**
+   * The violation handler when there's no matching metadata entry in the MS.
+   */
+  public static class NoMetadataEntry extends ViolationHandler {
+
+public NoMetadataEntry(S3GuardFsck.ComparePair comparePair) {
+  super(comparePair);
+}
+
+@Override
+public String getError() {
+  

[GitHub] [hadoop] bgaborg commented on a change in pull request #1208: HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log)

2019-09-09 Thread GitBox
bgaborg commented on a change in pull request #1208: HADOOP-16423. S3Guard 
fsck: Check metadata consistency between S3 and metadatastore (log)
URL: https://github.com/apache/hadoop/pull/1208#discussion_r322474178
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardFsckViolationHandler.java
 ##
 @@ -0,0 +1,315 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.s3guard;
+
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.s3a.S3AFileStatus;
+import org.apache.hadoop.fs.s3a.S3AFileSystem;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.lang.reflect.InvocationTargetException;
+import java.util.Arrays;
+import java.util.List;
+
+/**
+ * Violation handler for the S3Guard's fsck.
+ */
+public class S3GuardFsckViolationHandler {
+  private static final Logger LOG = LoggerFactory.getLogger(
+  S3GuardFsckViolationHandler.class);
+
+  // The rawFS and metadataStore are here to prepare when the ViolationHandlers
+  // will not just log, but fix the violations, so they will have access.
+  private S3AFileSystem rawFs;
+  private DynamoDBMetadataStore metadataStore;
+
+  private static String newLine = System.getProperty("line.separator");
+
+  public S3GuardFsckViolationHandler(S3AFileSystem fs,
+  DynamoDBMetadataStore ddbms) {
+
+this.metadataStore = ddbms;
+this.rawFs = fs;
+  }
+
+  public void handle(S3GuardFsck.ComparePair comparePair) {
+if (!comparePair.containsViolation()) {
+  LOG.debug("There is no violation in the compare pair: " + toString());
+  return;
+}
+
+StringBuilder sB = new StringBuilder();
+sB.append(newLine)
+.append("On path: ").append(comparePair.getPath()).append(newLine);
+
+// Create a new instance of the handler and use it.
+for (S3GuardFsck.Violation violation : comparePair.getViolations()) {
+  try {
+ViolationHandler handler = violation.getHandler()
+.getDeclaredConstructor(S3GuardFsck.ComparePair.class)
+.newInstance(comparePair);
+final String errorStr = handler.getError();
+sB.append(errorStr);
+  } catch (NoSuchMethodException e) {
+LOG.error("Can not find declared constructor for handler: {}",
+violation.getHandler());
+  } catch (IllegalAccessException | InstantiationException | 
InvocationTargetException e) {
+LOG.error("Can not instantiate handler: {}",
+violation.getHandler());
+  }
+  sB.append(newLine);
+}
+LOG.error(sB.toString());
 
 Review comment:
   We will change it, though we will do it in a different jira: 
https://issues.apache.org/jira/browse/HADOOP-16507


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bgaborg commented on a change in pull request #1208: HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log)

2019-09-09 Thread GitBox
bgaborg commented on a change in pull request #1208: HADOOP-16423. S3Guard 
fsck: Check metadata consistency between S3 and metadatastore (log)
URL: https://github.com/apache/hadoop/pull/1208#discussion_r322473774
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardFsckViolationHandler.java
 ##
 @@ -0,0 +1,315 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.s3guard;
+
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.s3a.S3AFileStatus;
+import org.apache.hadoop.fs.s3a.S3AFileSystem;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.lang.reflect.InvocationTargetException;
+import java.util.Arrays;
+import java.util.List;
+
+/**
+ * Violation handler for the S3Guard's fsck.
+ */
+public class S3GuardFsckViolationHandler {
+  private static final Logger LOG = LoggerFactory.getLogger(
+  S3GuardFsckViolationHandler.class);
+
+  // The rawFS and metadataStore are here to prepare when the ViolationHandlers
+  // will not just log, but fix the violations, so they will have access.
+  private S3AFileSystem rawFs;
+  private DynamoDBMetadataStore metadataStore;
+
+  private static String newLine = System.getProperty("line.separator");
+
+  public S3GuardFsckViolationHandler(S3AFileSystem fs,
+  DynamoDBMetadataStore ddbms) {
+
+this.metadataStore = ddbms;
+this.rawFs = fs;
+  }
+
+  public void handle(S3GuardFsck.ComparePair comparePair) {
+if (!comparePair.containsViolation()) {
+  LOG.debug("There is no violation in the compare pair: " + toString());
 
 Review comment:
   it was only a toString, which called this.toString(), so this was 
extraordinarily ugly by itself.
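
   A minimal illustration of the parameterized logging form that such a cleanup would typically switch to (a sketch, not the actual follow-up patch):

   ```java
   import org.slf4j.Logger;
   import org.slf4j.LoggerFactory;

   /** Sketch only; illustrates SLF4J parameterized logging. */
   final class LoggingStyleExample {
     private static final Logger LOG =
         LoggerFactory.getLogger(LoggingStyleExample.class);

     void report(Object comparePair) {
       // The argument is only rendered when debug logging is enabled, and the
       // compare pair itself (not the handler's toString()) is what gets logged.
       LOG.debug("There is no violation in the compare pair: {}", comparePair);
     }
   }
   ```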


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bgaborg commented on a change in pull request #1208: HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log)

2019-09-09 Thread GitBox
bgaborg commented on a change in pull request #1208: HADOOP-16423. S3Guard 
fsck: Check metadata consistency between S3 and metadatastore (log)
URL: https://github.com/apache/hadoop/pull/1208#discussion_r322470821
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardFsck.java
 ##
 @@ -0,0 +1,421 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.s3guard;
+
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.s3a.AWSBadRequestException;
+import org.apache.hadoop.fs.s3a.S3AFileStatus;
+import org.apache.hadoop.fs.s3a.S3AFileSystem;
+
+import com.google.common.base.Stopwatch;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.security.InvalidParameterException;
+import java.util.ArrayDeque;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Queue;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+
+import static java.util.stream.Collectors.toList;
+import static java.util.stream.Collectors.toSet;
+
+/**
+ * Main class for the FSCK factored out from S3GuardTool
+ * The implementation uses fixed DynamoDBMetadataStore as the backing store
+ * for metadata.
+ *
+ * Functions:
+ * <ul>
+ *   <li>Checking metadata consistency between S3 and metadatastore</li>
+ * </ul>
+ */
+public class S3GuardFsck {
+  private static final Logger LOG = LoggerFactory.getLogger(S3GuardFsck.class);
+  public static final String ROOT_PATH_STRING = "/";
+
+  private S3AFileSystem rawFS;
+  private DynamoDBMetadataStore metadataStore;
+
+  /**
+   * Creates an S3GuardFsck.
+   * @param fs the filesystem to compare to
+   * @param ms metadatastore the metadatastore to compare with (dynamo)
+   */
+  S3GuardFsck(S3AFileSystem fs, MetadataStore ms)
+  throws InvalidParameterException {
+this.rawFS = fs;
+
+if (ms == null) {
+  throw new InvalidParameterException("S3AFileSystem should be guarded by"
+  + " a " + DynamoDBMetadataStore.class.getCanonicalName());
+}
+this.metadataStore = (DynamoDBMetadataStore) ms;
+
+if (rawFS.hasMetadataStore()) {
+  throw new InvalidParameterException("Raw fs should not have a "
+  + "metadatastore.");
+}
+  }
+
+  /**
+   * Compares S3 to MS.
+   * Iterative breadth first walk on the S3 structure from a given root.
+   * Creates a list of pairs (metadata in S3 and in the MetadataStore) where
+   * the consistency or any rule is violated.
+   * Uses {@link S3GuardFsckViolationHandler} to handle violations.
+   * The violations are listed in Enums: {@link Violation}
+   *
+   * @param p the root path to start the traversal
+   * @throws IOException
+   * @return a list of {@link ComparePair}
+   */
+  public List<ComparePair> compareS3ToMs(Path p) throws IOException {
+Stopwatch stopwatch = Stopwatch.createStarted();
+int scannedItems = 0;
+
+final Path rootPath = rawFS.qualify(p);
+S3AFileStatus root = null;
+try {
+  root = (S3AFileStatus) rawFS.getFileStatus(rootPath);
+} catch (AWSBadRequestException e) {
+  throw new IOException(e.getMessage());
+}
+final List<ComparePair> comparePairs = new ArrayList<>();
+final Queue<S3AFileStatus> queue = new ArrayDeque<>();
+queue.add(root);
+
+while (!queue.isEmpty()) {
+  final S3AFileStatus currentDir = queue.poll();
+  scannedItems++;
+
+  final Path currentDirPath = currentDir.getPath();
+  List<FileStatus> s3DirListing = 
Arrays.asList(rawFS.listStatus(currentDirPath));
+
+  // DIRECTORIES
+  // Check directory authoritativeness consistency
+  compareAuthoritativeDirectoryFlag(comparePairs, currentDirPath, 
s3DirListing);
+  // Add all descendant directory to the queue
+  s3DirListing.stream().filter(pm -> pm.isDirectory())
+  .map(S3AFileStatus.class::cast)
+  .forEach(pm -> queue.add(pm));
+
+  // FILES
+  // check files for consistency
+  final List<S3AFileStatus> children = s3DirListing.stream()
+  .filter(status -> !status.isDirectory())
+  .map(S3AFileStatus.class::cast).collect(toList());
+  
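
A rough usage sketch of the entry point described in the javadoc above, assuming a caller in the same package (the constructor is package-private) that already holds a raw S3AFileSystem and the DynamoDB-backed metadata store; the helper name below is hypothetical and not part of the patch:

    // Sketch only: drive the S3 -> MetadataStore comparison from a root path.
    static List<S3GuardFsck.ComparePair> runCompare(S3AFileSystem rawFs,
        DynamoDBMetadataStore ms, Path root) throws IOException {
      S3GuardFsck fsck = new S3GuardFsck(rawFs, ms);
      // Per the javadoc, violations found during the breadth-first walk are
      // reported through S3GuardFsckViolationHandler; the compare pairs are
      // also returned so the caller can inspect them.
      return fsck.compareS3ToMs(root);
    }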

[GitHub] [hadoop] bgaborg commented on a change in pull request #1208: HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log)

2019-09-09 Thread GitBox
bgaborg commented on a change in pull request #1208: HADOOP-16423. S3Guard 
fsck: Check metadata consistency between S3 and metadatastore (log)
URL: https://github.com/apache/hadoop/pull/1208#discussion_r322469410
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardFsck.java
 ##
 @@ -0,0 +1,421 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.s3guard;
+
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.s3a.AWSBadRequestException;
+import org.apache.hadoop.fs.s3a.S3AFileStatus;
+import org.apache.hadoop.fs.s3a.S3AFileSystem;
+
+import com.google.common.base.Stopwatch;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.security.InvalidParameterException;
+import java.util.ArrayDeque;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Queue;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+
+import static java.util.stream.Collectors.toList;
+import static java.util.stream.Collectors.toSet;
+
+/**
+ * Main class for the FSCK factored out from S3GuardTool
+ * The implementation uses fixed DynamoDBMetadataStore as the backing store
+ * for metadata.
+ *
+ * Functions:
+ * <ul>
+ *   <li>Checking metadata consistency between S3 and metadatastore</li>
+ * </ul>
+ */
+public class S3GuardFsck {
+  private static final Logger LOG = LoggerFactory.getLogger(S3GuardFsck.class);
+  public static final String ROOT_PATH_STRING = "/";
+
+  private S3AFileSystem rawFS;
+  private DynamoDBMetadataStore metadataStore;
+
+  /**
+   * Creates an S3GuardFsck.
+   * @param fs the filesystem to compare to
+   * @param ms metadatastore the metadatastore to compare with (dynamo)
+   */
+  S3GuardFsck(S3AFileSystem fs, MetadataStore ms)
+  throws InvalidParameterException {
+this.rawFS = fs;
+
+if (ms == null) {
+  throw new InvalidParameterException("S3AFileSystem should be guarded by"
+  + " a " + DynamoDBMetadataStore.class.getCanonicalName());
+}
+this.metadataStore = (DynamoDBMetadataStore) ms;
+
+if (rawFS.hasMetadataStore()) {
+  throw new InvalidParameterException("Raw fs should not have a "
+  + "metadatastore.");
+}
+  }
+
+  /**
+   * Compares S3 to MS.
+   * Iterative breadth first walk on the S3 structure from a given root.
+   * Creates a list of pairs (metadata in S3 and in the MetadataStore) where
+   * the consistency or any rule is violated.
+   * Uses {@link S3GuardFsckViolationHandler} to handle violations.
+   * The violations are listed in Enums: {@link Violation}
+   *
+   * @param p the root path to start the traversal
+   * @throws IOException
+   * @return a list of {@link ComparePair}
+   */
+  public List<ComparePair> compareS3ToMs(Path p) throws IOException {
+Stopwatch stopwatch = Stopwatch.createStarted();
+int scannedItems = 0;
+
+final Path rootPath = rawFS.qualify(p);
+S3AFileStatus root = null;
+try {
+  root = (S3AFileStatus) rawFS.getFileStatus(rootPath);
+} catch (AWSBadRequestException e) {
+  throw new IOException(e.getMessage());
+}
+final List<ComparePair> comparePairs = new ArrayList<>();
+final Queue<S3AFileStatus> queue = new ArrayDeque<>();
+queue.add(root);
+
+while (!queue.isEmpty()) {
+  final S3AFileStatus currentDir = queue.poll();
+  scannedItems++;
+
+  final Path currentDirPath = currentDir.getPath();
+  List<FileStatus> s3DirListing = 
Arrays.asList(rawFS.listStatus(currentDirPath));
 
 Review comment:
   We just log with error and continue, right?
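
   If that is the intended behaviour, one possible shape of the log-and-continue handling around the listing call (a sketch only, not the code in the patch) would be:

       List<FileStatus> s3DirListing;
       try {
         s3DirListing = Arrays.asList(rawFS.listStatus(currentDirPath));
       } catch (IOException e) {
         // Log the failure and skip this directory, but keep walking the
         // rest of the tree instead of aborting the whole fsck run.
         LOG.error("Failed to list {}: {}", currentDirPath, e.toString());
         continue;
       }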


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: 

[GitHub] [hadoop] bgaborg commented on a change in pull request #1208: HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log)

2019-09-09 Thread GitBox
bgaborg commented on a change in pull request #1208: HADOOP-16423. S3Guard 
fsck: Check metadata consistency between S3 and metadatastore (log)
URL: https://github.com/apache/hadoop/pull/1208#discussion_r322469181
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardFsck.java
 ##
 @@ -0,0 +1,421 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.s3guard;
+
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.s3a.AWSBadRequestException;
+import org.apache.hadoop.fs.s3a.S3AFileStatus;
+import org.apache.hadoop.fs.s3a.S3AFileSystem;
+
+import com.google.common.base.Stopwatch;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.security.InvalidParameterException;
+import java.util.ArrayDeque;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Queue;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+
+import static java.util.stream.Collectors.toList;
+import static java.util.stream.Collectors.toSet;
+
+/**
+ * Main class for the FSCK factored out from S3GuardTool
+ * The implementation uses fixed DynamoDBMetadataStore as the backing store
+ * for metadata.
+ *
+ * Functions:
+ * <ul>
+ *   <li>Checking metadata consistency between S3 and metadatastore</li>
+ * </ul>
+ */
+public class S3GuardFsck {
+  private static final Logger LOG = LoggerFactory.getLogger(S3GuardFsck.class);
+  public static final String ROOT_PATH_STRING = "/";
+
+  private S3AFileSystem rawFS;
+  private DynamoDBMetadataStore metadataStore;
+
+  /**
+   * Creates an S3GuardFsck.
+   * @param fs the filesystem to compare to
+   * @param ms metadatastore the metadatastore to compare with (dynamo)
+   */
+  S3GuardFsck(S3AFileSystem fs, MetadataStore ms)
+  throws InvalidParameterException {
+this.rawFS = fs;
+
+if (ms == null) {
+  throw new InvalidParameterException("S3AFileSystem should be guarded by"
+  + " a " + DynamoDBMetadataStore.class.getCanonicalName());
+}
+this.metadataStore = (DynamoDBMetadataStore) ms;
+
+if (rawFS.hasMetadataStore()) {
+  throw new InvalidParameterException("Raw fs should not have a "
+  + "metadatastore.");
+}
+  }
+
+  /**
+   * Compares S3 to MS.
+   * Iterative breadth first walk on the S3 structure from a given root.
+   * Creates a list of pairs (metadata in S3 and in the MetadataStore) where
+   * the consistency or any rule is violated.
+   * Uses {@link S3GuardFsckViolationHandler} to handle violations.
+   * The violations are listed in Enums: {@link Violation}
+   *
+   * @param p the root path to start the traversal
+   * @throws IOException
+   * @return a list of {@link ComparePair}
+   */
+  public List<ComparePair> compareS3ToMs(Path p) throws IOException {
+Stopwatch stopwatch = Stopwatch.createStarted();
+int scannedItems = 0;
+
+final Path rootPath = rawFS.qualify(p);
+S3AFileStatus root = null;
+try {
+  root = (S3AFileStatus) rawFS.getFileStatus(rootPath);
+} catch (AWSBadRequestException e) {
+  throw new IOException(e.getMessage());
+}
+final List<ComparePair> comparePairs = new ArrayList<>();
+final Queue<S3AFileStatus> queue = new ArrayDeque<>();
+queue.add(root);
+
+while (!queue.isEmpty()) {
+  final S3AFileStatus currentDir = queue.poll();
+  scannedItems++;
+
+  final Path currentDirPath = currentDir.getPath();
+  List<FileStatus> s3DirListing = 
Arrays.asList(rawFS.listStatus(currentDirPath));
 
 Review comment:
   added


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bgaborg commented on a change in pull request #1208: HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log)

2019-09-09 Thread GitBox
bgaborg commented on a change in pull request #1208: HADOOP-16423. S3Guard 
fsck: Check metadata consistency between S3 and metadatastore (log)
URL: https://github.com/apache/hadoop/pull/1208#discussion_r322463385
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardTool.java
 ##
 @@ -1485,6 +1486,93 @@ private void vprintln(PrintStream out, String format, 
Object...
 }
   }
 
+  /**
+   * Prune metadata that has not been modified recently.
 
 Review comment:
   It's like plagiarism. Fixed.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bgaborg commented on issue #1208: HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log)

2019-09-09 Thread GitBox
bgaborg commented on issue #1208: HADOOP-16423. S3Guard fsck: Check metadata 
consistency between S3 and metadatastore (log)
URL: https://github.com/apache/hadoop/pull/1208#issuecomment-529676557
 
 
   Thank you very much for the review @steveloughran. It's really thorough and 
complete.
   To be honest the code was still rough around the edges - I wanted another 
round before the final review just to get the idea right, because once we agree 
on the design I can remove things like a System.out here and there. I usually 
leave those in place as long as the bigger concept is not settled.
   But that is exactly how these things creep into the code and stay there. Like 
this time: I forgot to return to my PR and do the cleanup.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao commented on issue #1415: HDDS-2075. Tracing in OzoneManager call is propagated with wrong parent

2019-09-09 Thread GitBox
xiaoyuyao commented on issue #1415: HDDS-2075. Tracing in OzoneManager call is 
propagated with wrong parent
URL: https://github.com/apache/hadoop/pull/1415#issuecomment-529673327
 
 
   LGTM, +1. Thanks @adoroszlai for fixing this.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bgaborg commented on a change in pull request #1208: HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log)

2019-09-09 Thread GitBox
bgaborg commented on a change in pull request #1208: HADOOP-16423. S3Guard 
fsck: Check metadata consistency between S3 and metadatastore (log)
URL: https://github.com/apache/hadoop/pull/1208#discussion_r322460124
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestS3GuardFsck.java
 ##
 @@ -0,0 +1,707 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.s3guard;
+
+
+import java.net.URI;
+import java.util.List;
+import java.util.UUID;
+
+import org.apache.hadoop.io.IOUtils;
+import org.junit.Before;
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.s3a.AbstractS3ATestBase;
+import org.apache.hadoop.fs.s3a.S3AFileStatus;
+import org.apache.hadoop.fs.s3a.S3AFileSystem;
+import org.assertj.core.api.Assertions;
+
+import static org.apache.hadoop.fs.contract.ContractTestUtils.touch;
+import static org.apache.hadoop.fs.s3a.Constants.METADATASTORE_AUTHORITATIVE;
+import static org.apache.hadoop.fs.s3a.Constants.S3_METADATA_STORE_IMPL;
+import static org.apache.hadoop.fs.s3a.S3ATestUtils.awaitFileStatus;
+import static 
org.apache.hadoop.fs.s3a.S3ATestUtils.metadataStorePersistsAuthoritativeBit;
+import static 
org.apache.hadoop.fs.s3a.S3ATestUtils.removeBaseAndBucketOverrides;
+import static org.junit.Assume.assumeTrue;
+
+/**
+ * Integration tests for the S3Guard Fsck against a dynamodb backed metadata
+ * store.
+ */
+public class ITestS3GuardFsck extends AbstractS3ATestBase {
+
+  private S3AFileSystem guardedFs;
+  private S3AFileSystem rawFS;
+
+  private MetadataStore metadataStore;
+
+  @Before
+  public void setup() throws Exception {
+super.setup();
+S3AFileSystem fs = getFileSystem();
+// These tests will fail if there is no MS
+assertTrue("FS needs to have a metadatastore.",
+fs.hasMetadataStore());
+assertTrue("Metadatastore should persist authoritative bit",
+metadataStorePersistsAuthoritativeBit(fs.getMetadataStore()));
+
+guardedFs = fs;
+metadataStore = fs.getMetadataStore();
+
+// create raw fs without s3guard
+rawFS = createUnguardedFS();
+assertFalse("Raw FS still has S3Guard " + rawFS,
+rawFS.hasMetadataStore());
+  }
+
+  @Override
+  public void teardown() throws Exception {
+if (guardedFs != null) {
+  IOUtils.cleanupWithLogger(LOG, guardedFs);
+}
+IOUtils.cleanupWithLogger(LOG, rawFS);
+super.teardown();
+  }
+
+  /**
+   * Create a test filesystem which is always unguarded.
+   * This filesystem MUST be closed in test teardown.
+   * @return the new FS
+   */
+  private S3AFileSystem createUnguardedFS() throws Exception {
+S3AFileSystem testFS = getFileSystem();
+Configuration config = new Configuration(testFS.getConf());
+URI uri = testFS.getUri();
+
+removeBaseAndBucketOverrides(uri.getHost(), config,
+S3_METADATA_STORE_IMPL);
+removeBaseAndBucketOverrides(uri.getHost(), config,
 
 Review comment:
   also merged those two into one.
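
   For reference, the merged form would look roughly like this, assuming the varargs overload of removeBaseAndBucketOverrides and that the second, truncated call above removed METADATASTORE_AUTHORITATIVE:

       // Single call clearing both per-bucket overrides for the raw FS.
       removeBaseAndBucketOverrides(uri.getHost(), config,
           S3_METADATA_STORE_IMPL, METADATASTORE_AUTHORITATIVE);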


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on issue #1416: HDDS-2102. HddsVolumeChecker should use java optional in place of Guava optional. Contributed by Mukul Kumar Singh.

2019-09-09 Thread GitBox
bharatviswa504 commented on issue #1416: HDDS-2102. HddsVolumeChecker should 
use java optional in place of Guava optional. Contributed by Mukul Kumar Singh.
URL: https://github.com/apache/hadoop/pull/1416#issuecomment-529670900
 
 
   Thank You @mukul1987 for the contribution.
   I have committed this to the trunk.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 merged pull request #1416: HDDS-2102. HddsVolumeChecker should use java optional in place of Guava optional. Contributed by Mukul Kumar Singh.

2019-09-09 Thread GitBox
bharatviswa504 merged pull request #1416: HDDS-2102. HddsVolumeChecker should 
use java optional in place of Guava optional. Contributed by Mukul Kumar Singh.
URL: https://github.com/apache/hadoop/pull/1416
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bgaborg commented on a change in pull request #1208: HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log)

2019-09-09 Thread GitBox
bgaborg commented on a change in pull request #1208: HADOOP-16423. S3Guard 
fsck: Check metadata consistency between S3 and metadatastore (log)
URL: https://github.com/apache/hadoop/pull/1208#discussion_r322458525
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardFsckViolationHandler.java
 ##
 @@ -0,0 +1,312 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.s3guard;
+
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.s3a.S3AFileStatus;
+import org.apache.hadoop.fs.s3a.S3AFileSystem;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.lang.reflect.InvocationTargetException;
+import java.util.Arrays;
+import java.util.List;
+
+/**
+ * Violation handler for the S3Guard's fsck.
+ */
+public class S3GuardFsckViolationHandler {
+  private static final Logger LOG = LoggerFactory.getLogger(
+  S3GuardFsckViolationHandler.class);
+
+  private S3AFileSystem rawFs;
+  private DynamoDBMetadataStore metadataStore;
+  private static String newLine = System.getProperty("line.separator");
+
+  public S3GuardFsckViolationHandler(S3AFileSystem fs,
+  DynamoDBMetadataStore ddbms) {
+
+this.metadataStore = ddbms;
+this.rawFs = fs;
+  }
+
+  public void handle(S3GuardFsck.ComparePair comparePair) {
+if (!comparePair.containsViolation()) {
+  LOG.debug("There is no violation in the compare pair: " + toString());
+  return;
+}
+
+StringBuilder sB = new StringBuilder();
+sB.append(newLine)
+.append("On path: ").append(comparePair.getPath()).append(newLine);
+
+// Create a new instance of the handler and use it.
+for (S3GuardFsck.Violation violation : comparePair.getViolations()) {
+  try {
+ViolationHandler handler = violation.getHandler()
+.getDeclaredConstructor(S3GuardFsck.ComparePair.class)
+.newInstance(comparePair);
+final String errorStr = handler.getError();
+sB.append(errorStr);
+  } catch (NoSuchMethodException e) {
+LOG.error("Can not find declared constructor for handler: {}",
+violation.getHandler());
+  } catch (IllegalAccessException | InstantiationException | 
InvocationTargetException e) {
+LOG.error("Can not instantiate handler: {}",
+violation.getHandler());
+  }
+  sB.append(newLine);
+}
+LOG.error(sB.toString());
+  }
+
+  /**
+   * Violation handler abstract class.
+   * This class should be extended for violation handlers.
+   */
+  public static abstract class ViolationHandler {
+private final PathMetadata pathMetadata;
+private final S3AFileStatus s3FileStatus;
+private final S3AFileStatus msFileStatus;
+private final List<FileStatus> s3DirListing;
+private final DirListingMetadata msDirListing;
+
+public ViolationHandler(S3GuardFsck.ComparePair comparePair) {
+  pathMetadata = comparePair.getMsPathMetadata();
+  s3FileStatus = comparePair.getS3FileStatus();
+  if (pathMetadata != null) {
+msFileStatus = pathMetadata.getFileStatus();
+  } else {
+msFileStatus = null;
+  }
+  s3DirListing = comparePair.getS3DirListing();
+  msDirListing = comparePair.getMsDirListing();
+}
+
+abstract String getError();
+
+public PathMetadata getPathMetadata() {
+  return pathMetadata;
+}
+
+public S3AFileStatus getS3FileStatus() {
+  return s3FileStatus;
+}
+
+public S3AFileStatus getMsFileStatus() {
+  return msFileStatus;
+}
+
+public List<FileStatus> getS3DirListing() {
+  return s3DirListing;
+}
+
+public DirListingMetadata getMsDirListing() {
+  return msDirListing;
+}
+  }
+
+  /**
+   * The violation handler when there's no matching metadata entry in the MS.
+   */
+  public static class NoMetadataEntry extends ViolationHandler {
+
+public NoMetadataEntry(S3GuardFsck.ComparePair comparePair) {
+  super(comparePair);
+}
+
+@Override
+public String getError() {
+  return "No PathMetadata for this path in the MS.";
+}
+  }
+
+  /**
+   * The violation handler when there's no parent entry.
+   */
+  public static 

[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1208: HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log)

2019-09-09 Thread GitBox
hadoop-yetus removed a comment on issue #1208: HADOOP-16423. S3Guard fsck: 
Check metadata consistency between S3 and metadatastore (log)
URL: https://github.com/apache/hadoop/pull/1208#issuecomment-519456685
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 41 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 6 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1051 | trunk passed |
   | +1 | compile | 31 | trunk passed |
   | +1 | checkstyle | 22 | trunk passed |
   | +1 | mvnsite | 34 | trunk passed |
   | +1 | shadedclient | 681 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 23 | trunk passed |
   | 0 | spotbugs | 53 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 50 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 34 | the patch passed |
   | +1 | compile | 25 | the patch passed |
   | +1 | javac | 25 | the patch passed |
   | -0 | checkstyle | 17 | hadoop-tools/hadoop-aws: The patch generated 63 new 
+ 27 unchanged - 0 fixed = 90 total (was 27) |
   | +1 | mvnsite | 29 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 725 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 21 | hadoop-tools_hadoop-aws generated 1 new + 1 unchanged 
- 0 fixed = 2 total (was 1) |
   | -1 | findbugs | 62 | hadoop-tools/hadoop-aws generated 1 new + 0 unchanged 
- 0 fixed = 1 total (was 0) |
   ||| _ Other Tests _ |
   | +1 | unit | 289 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 39 | The patch does not generate ASF License warnings. |
   | | | 3242 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-tools/hadoop-aws |
   |  |  
org.apache.hadoop.fs.s3a.s3guard.S3GuardFsck$ComparePair.getS3DirListing() may 
expose internal representation by returning 
S3GuardFsck$ComparePair.s3DirListing  At S3GuardFsck.java:by returning 
S3GuardFsck$ComparePair.s3DirListing  At S3GuardFsck.java:[line 323] |
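   
   A common way to clear this "may expose internal representation" warning is to hand out a read-only view of the internal list; this is a sketch only, not necessarily the fix that was applied in the patch:
   
       // Requires java.util.Collections. Returning an unmodifiable view keeps
       // callers from mutating ComparePair's internal listing.
       public List<FileStatus> getS3DirListing() {
         return s3DirListing == null
             ? null
             : Collections.unmodifiableList(s3DirListing);
       }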
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1208 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux ba71f4d854bd 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 00b5a27 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/6/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/6/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/6/artifact/out/new-findbugs-hadoop-tools_hadoop-aws.html
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/6/testReport/ |
   | Max. process+thread count | 447 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/6/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1208: HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log)

2019-09-09 Thread GitBox
hadoop-yetus removed a comment on issue #1208: HADOOP-16423. S3Guard fsck: 
Check metadata consistency between S3 and metadatastore (log)
URL: https://github.com/apache/hadoop/pull/1208#issuecomment-528899723
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 109 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 6 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1373 | trunk passed |
   | +1 | compile | 38 | trunk passed |
   | +1 | checkstyle | 33 | trunk passed |
   | +1 | mvnsite | 42 | trunk passed |
   | +1 | shadedclient | 805 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 26 | trunk passed |
   | 0 | spotbugs | 58 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 56 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 33 | the patch passed |
   | +1 | compile | 28 | the patch passed |
   | +1 | javac | 28 | the patch passed |
   | -0 | checkstyle | 19 | hadoop-tools/hadoop-aws: The patch generated 28 new 
+ 25 unchanged - 0 fixed = 53 total (was 25) |
   | +1 | mvnsite | 31 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 819 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 23 | the patch passed |
   | +1 | findbugs | 62 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 72 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 28 | The patch does not generate ASF License warnings. |
   | | | 3686 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/19/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1208 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 291df9b12830 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d98c548 |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/19/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/19/testReport/ |
   | Max. process+thread count | 315 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/19/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1208: HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log)

2019-09-09 Thread GitBox
hadoop-yetus removed a comment on issue #1208: HADOOP-16423. S3Guard fsck: 
Check metadata consistency between S3 and metadatastore (log)
URL: https://github.com/apache/hadoop/pull/1208#issuecomment-519874201
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 44 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 6 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1058 | trunk passed |
   | +1 | compile | 28 | trunk passed |
   | +1 | checkstyle | 21 | trunk passed |
   | +1 | mvnsite | 34 | trunk passed |
   | +1 | shadedclient | 674 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 24 | trunk passed |
   | 0 | spotbugs | 55 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 52 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 30 | the patch passed |
   | +1 | compile | 26 | the patch passed |
   | +1 | javac | 26 | the patch passed |
   | -0 | checkstyle | 18 | hadoop-tools/hadoop-aws: The patch generated 43 new 
+ 27 unchanged - 0 fixed = 70 total (was 27) |
   | +1 | mvnsite | 27 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 696 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 21 | hadoop-tools_hadoop-aws generated 1 new + 1 unchanged 
- 0 fixed = 2 total (was 1) |
   | -1 | findbugs | 60 | hadoop-tools/hadoop-aws generated 2 new + 0 unchanged 
- 0 fixed = 2 total (was 0) |
   ||| _ Other Tests _ |
   | +1 | unit | 292 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 3225 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-tools/hadoop-aws |
   |  |  
org.apache.hadoop.fs.s3a.s3guard.S3GuardFsck$ComparePair.getS3DirListing() may 
expose internal representation by returning 
S3GuardFsck$ComparePair.s3DirListing  At S3GuardFsck.java:by returning 
S3GuardFsck$ComparePair.s3DirListing  At S3GuardFsck.java:[line 317] |
   |  |  Possible null pointer dereference of ms in 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$Fsck.run(String[], PrintStream)  
Dereferenced at S3GuardTool.java:ms in 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$Fsck.run(String[], PrintStream)  
Dereferenced at S3GuardTool.java:[line 1548] |
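   
   A typical guard for the possible null dereference of ms flagged above is to fail fast before the store is used; this is a sketch only, and the exact error-reporting path (exception vs. exit code) used in the patch may differ:
   
       // Refuse to run fsck when the target filesystem has no metadata store
       // bound to it, instead of dereferencing a null ms later on.
       if (ms == null) {
         throw new IOException("No metadata store is configured for this "
             + "filesystem; fsck requires a DynamoDB-backed metadata store");
       }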
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1208 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux feb0004d9943 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f6fa865 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/7/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/7/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/7/artifact/out/new-findbugs-hadoop-tools_hadoop-aws.html
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/7/testReport/ |
   | Max. process+thread count | 446 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/7/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1208: HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log)

2019-09-09 Thread GitBox
hadoop-yetus removed a comment on issue #1208: HADOOP-16423. S3Guard fsck: 
Check metadata consistency between S3 and metadatastore (log)
URL: https://github.com/apache/hadoop/pull/1208#issuecomment-523897013
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 41 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 6 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1048 | trunk passed |
   | +1 | compile | 28 | trunk passed |
   | +1 | checkstyle | 21 | trunk passed |
   | +1 | mvnsite | 34 | trunk passed |
   | +1 | shadedclient | 679 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 21 | trunk passed |
   | 0 | spotbugs | 54 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 52 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 37 | the patch passed |
   | +1 | compile | 27 | the patch passed |
   | +1 | javac | 27 | the patch passed |
   | -0 | checkstyle | 17 | hadoop-tools/hadoop-aws: The patch generated 18 new 
+ 29 unchanged - 0 fixed = 47 total (was 29) |
   | +1 | mvnsite | 30 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 702 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 20 | hadoop-tools_hadoop-aws generated 1 new + 1 unchanged 
- 0 fixed = 2 total (was 1) |
   | +1 | findbugs | 57 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 71 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 25 | The patch does not generate ASF License warnings. |
   | | | 2985 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/13/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1208 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 17f0083ac6cf 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 69ddb36 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/13/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/13/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/13/testReport/ |
   | Max. process+thread count | 414 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/13/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1208: HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log)

2019-09-09 Thread GitBox
hadoop-yetus removed a comment on issue #1208: HADOOP-16423. S3Guard fsck: 
Check metadata consistency between S3 and metadatastore (log)
URL: https://github.com/apache/hadoop/pull/1208#issuecomment-520406994
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 69 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 6 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1157 | trunk passed |
   | +1 | compile | 31 | trunk passed |
   | +1 | checkstyle | 23 | trunk passed |
   | +1 | mvnsite | 36 | trunk passed |
   | +1 | shadedclient | 793 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 25 | trunk passed |
   | 0 | spotbugs | 57 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 54 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 32 | the patch passed |
   | +1 | compile | 27 | the patch passed |
   | +1 | javac | 27 | the patch passed |
   | -0 | checkstyle | 19 | hadoop-tools/hadoop-aws: The patch generated 43 new 
+ 29 unchanged - 0 fixed = 72 total (was 29) |
   | +1 | mvnsite | 30 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 825 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 23 | hadoop-tools_hadoop-aws generated 1 new + 1 unchanged 
- 0 fixed = 2 total (was 1) |
   | -1 | findbugs | 63 | hadoop-tools/hadoop-aws generated 2 new + 0 unchanged 
- 0 fixed = 2 total (was 0) |
   ||| _ Other Tests _ |
   | +1 | unit | 81 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 3394 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-tools/hadoop-aws |
   |  |  
org.apache.hadoop.fs.s3a.s3guard.S3GuardFsck$ComparePair.getS3DirListing() may 
expose internal representation by returning 
S3GuardFsck$ComparePair.s3DirListing  At S3GuardFsck.java:by returning 
S3GuardFsck$ComparePair.s3DirListing  At S3GuardFsck.java:[line 317] |
   |  |  Possible null pointer dereference of ms in 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$Fsck.run(String[], PrintStream)  
Dereferenced at S3GuardTool.java:ms in 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$Fsck.run(String[], PrintStream)  
Dereferenced at S3GuardTool.java:[line 1550] |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1208 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 0c3c9d2a6ada 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / dfe772d |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/9/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/9/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/9/artifact/out/new-findbugs-hadoop-tools_hadoop-aws.html
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/9/testReport/ |
   | Max. process+thread count | 339 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/9/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1208: HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log)

2019-09-09 Thread GitBox
hadoop-yetus removed a comment on issue #1208: HADOOP-16423. S3Guard fsck: 
Check metadata consistency between S3 and metadatastore (log)
URL: https://github.com/apache/hadoop/pull/1208#issuecomment-528052580
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 35 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 6 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1226 | trunk passed |
   | +1 | compile | 39 | trunk passed |
   | +1 | checkstyle | 29 | trunk passed |
   | +1 | mvnsite | 44 | trunk passed |
   | +1 | shadedclient | 965 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 27 | trunk passed |
   | 0 | spotbugs | 69 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 67 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 38 | the patch passed |
   | +1 | compile | 31 | the patch passed |
   | +1 | javac | 31 | the patch passed |
   | -0 | checkstyle | 22 | hadoop-tools/hadoop-aws: The patch generated 26 new 
+ 29 unchanged - 0 fixed = 55 total (was 29) |
   | +1 | mvnsite | 36 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 973 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 28 | the patch passed |
   | +1 | findbugs | 81 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 94 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 34 | The patch does not generate ASF License warnings. |
   | | | 3860 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/16/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1208 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 67ff0b461ba1 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 337e9b7 |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/16/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/16/testReport/ |
   | Max. process+thread count | 358 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/16/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1208: HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log)

2019-09-09 Thread GitBox
hadoop-yetus removed a comment on issue #1208: HADOOP-16423. S3Guard fsck: 
Check metadata consistency between S3 and metadatastore (log)
URL: https://github.com/apache/hadoop/pull/1208#issuecomment-520403604
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 46 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 6 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1115 | trunk passed |
   | +1 | compile | 34 | trunk passed |
   | +1 | checkstyle | 21 | trunk passed |
   | +1 | mvnsite | 37 | trunk passed |
   | +1 | shadedclient | 703 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 26 | trunk passed |
   | 0 | spotbugs | 58 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 55 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 32 | the patch passed |
   | +1 | compile | 25 | the patch passed |
   | +1 | javac | 25 | the patch passed |
   | -0 | checkstyle | 18 | hadoop-tools/hadoop-aws: The patch generated 43 new 
+ 29 unchanged - 0 fixed = 72 total (was 29) |
   | +1 | mvnsite | 28 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 756 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 24 | hadoop-tools_hadoop-aws generated 1 new + 1 unchanged 
- 0 fixed = 2 total (was 1) |
   | -1 | findbugs | 65 | hadoop-tools/hadoop-aws generated 2 new + 0 unchanged 
- 0 fixed = 2 total (was 0) |
   ||| _ Other Tests _ |
   | +1 | unit | 80 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 32 | The patch does not generate ASF License warnings. |
   | | | 3184 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-tools/hadoop-aws |
   |  |  
org.apache.hadoop.fs.s3a.s3guard.S3GuardFsck$ComparePair.getS3DirListing() may 
expose internal representation by returning 
S3GuardFsck$ComparePair.s3DirListing  At S3GuardFsck.java:by returning 
S3GuardFsck$ComparePair.s3DirListing  At S3GuardFsck.java:[line 317] |
   |  |  Possible null pointer dereference of ms in 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$Fsck.run(String[], PrintStream)  
Dereferenced at S3GuardTool.java:ms in 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$Fsck.run(String[], PrintStream)  
Dereferenced at S3GuardTool.java:[line 1550] |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1208 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux a96a16943ad2 4.4.0-157-generic #185-Ubuntu SMP Tue Jul 23 
09:17:01 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / dfe772d |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/8/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/8/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/8/artifact/out/new-findbugs-hadoop-tools_hadoop-aws.html
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/8/testReport/ |
   | Max. process+thread count | 413 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/8/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1208: HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log)

2019-09-09 Thread GitBox
hadoop-yetus removed a comment on issue #1208: HADOOP-16423. S3Guard fsck: 
Check metadata consistency between S3 and metadatastore (log)
URL: https://github.com/apache/hadoop/pull/1208#issuecomment-528896491
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 75 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 6 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1226 | trunk passed |
   | +1 | compile | 32 | trunk passed |
   | +1 | checkstyle | 23 | trunk passed |
   | +1 | mvnsite | 38 | trunk passed |
   | +1 | shadedclient | 808 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 24 | trunk passed |
   | 0 | spotbugs | 55 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 52 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 33 | the patch passed |
   | +1 | compile | 28 | the patch passed |
   | +1 | javac | 28 | the patch passed |
   | -0 | checkstyle | 18 | hadoop-tools/hadoop-aws: The patch generated 28 new 
+ 25 unchanged - 0 fixed = 53 total (was 25) |
   | +1 | mvnsite | 31 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 842 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 25 | the patch passed |
   | +1 | findbugs | 73 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 98 | hadoop-aws in the patch passed. |
   | +1 | asflicense | -8 | The patch does not generate ASF License warnings. |
   | | | 3515 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/18/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1208 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 81023620d0ef 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d98c548 |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/18/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/18/testReport/ |
   | Max. process+thread count | 307 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/18/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1208: HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log)

2019-09-09 Thread GitBox
hadoop-yetus removed a comment on issue #1208: HADOOP-16423. S3Guard fsck: 
Check metadata consistency between S3 and metadatastore (log)
URL: https://github.com/apache/hadoop/pull/1208#issuecomment-528884183
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 77 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 6 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1283 | trunk passed |
   | +1 | compile | 36 | trunk passed |
   | +1 | checkstyle | 27 | trunk passed |
   | +1 | mvnsite | 41 | trunk passed |
   | +1 | shadedclient | 853 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 28 | trunk passed |
   | 0 | spotbugs | 73 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 70 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 40 | the patch passed |
   | +1 | compile | 30 | the patch passed |
   | +1 | javac | 30 | the patch passed |
   | -0 | checkstyle | 21 | hadoop-tools/hadoop-aws: The patch generated 28 new 
+ 25 unchanged - 0 fixed = 53 total (was 25) |
   | +1 | mvnsite | 38 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 845 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 26 | the patch passed |
   | +1 | findbugs | 67 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 88 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 27 | The patch does not generate ASF License warnings. |
   | | | 3692 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/17/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1208 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 1d3126e9e67d 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d98c548 |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/17/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/17/testReport/ |
   | Max. process+thread count | 307 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/17/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1208: HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log)

2019-09-09 Thread GitBox
hadoop-yetus removed a comment on issue #1208: HADOOP-16423. S3Guard fsck: 
Check metadata consistency between S3 and metadatastore (log)
URL: https://github.com/apache/hadoop/pull/1208#issuecomment-517363865
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 44 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 6 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1083 | trunk passed |
   | +1 | compile | 37 | trunk passed |
   | +1 | checkstyle | 24 | trunk passed |
   | +1 | mvnsite | 39 | trunk passed |
   | +1 | shadedclient | 748 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 30 | trunk passed |
   | 0 | spotbugs | 63 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | -1 | findbugs | 61 | hadoop-tools/hadoop-aws in trunk has 1 extant 
findbugs warnings. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 35 | the patch passed |
   | +1 | compile | 31 | the patch passed |
   | +1 | javac | 31 | the patch passed |
   | -0 | checkstyle | 21 | hadoop-tools/hadoop-aws: The patch generated 80 new 
+ 17 unchanged - 0 fixed = 97 total (was 17) |
   | +1 | mvnsite | 35 | the patch passed |
   | -1 | whitespace | 0 | The patch has 1 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | shadedclient | 791 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 22 | hadoop-tools_hadoop-aws generated 1 new + 1 unchanged 
- 0 fixed = 2 total (was 1) |
   | -1 | findbugs | 70 | hadoop-tools/hadoop-aws generated 1 new + 1 unchanged 
- 0 fixed = 2 total (was 1) |
   ||| _ Other Tests _ |
   | +1 | unit | 284 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 32 | The patch does not generate ASF License warnings. |
   | | | 3480 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-tools/hadoop-aws |
   |  |  Comparison of String objects using == or != in 
org.apache.hadoop.fs.s3a.s3guard.S3GuardFsck.compareFileStatusToPathMetadata(S3AFileStatus,
 PathMetadata)   At S3GuardFsck.java:== or != in 
org.apache.hadoop.fs.s3a.s3guard.S3GuardFsck.compareFileStatusToPathMetadata(S3AFileStatus,
 PathMetadata)   At S3GuardFsck.java:[line 228] |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1208 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux f9874d8eef11 4.4.0-157-generic #185-Ubuntu SMP Tue Jul 23 
09:17:01 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / a7371a7 |
   | Default Java | 1.8.0_212 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/1/artifact/out/branch-findbugs-hadoop-tools_hadoop-aws-warnings.html
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/1/artifact/out/whitespace-eol.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/1/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/1/artifact/out/new-findbugs-hadoop-tools_hadoop-aws.html
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/1/testReport/ |
   | Max. process+thread count | 444 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org

[GitHub] [hadoop] hadoop-yetus commented on issue #1416: HDDS-2102. HddsVolumeChecker should use java optional in place of Guava optional. Contributed by Mukul Kumar Singh.

2019-09-09 Thread GitBox
hadoop-yetus commented on issue #1416: HDDS-2102. HddsVolumeChecker should use 
java optional in place of Guava optional. Contributed by Mukul Kumar Singh.
URL: https://github.com/apache/hadoop/pull/1416#issuecomment-529666543
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 100 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 691 | trunk passed |
   | +1 | compile | 388 | trunk passed |
   | +1 | checkstyle | 74 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 979 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 185 | trunk passed |
   | 0 | spotbugs | 453 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 688 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 555 | the patch passed |
   | +1 | compile | 409 | the patch passed |
   | +1 | javac | 409 | the patch passed |
   | +1 | checkstyle | 80 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 725 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 183 | the patch passed |
   | +1 | findbugs | 768 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 313 | hadoop-hdds in the patch failed. |
   | -1 | unit | 289 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 42 | The patch does not generate ASF License warnings. |
   | | | 6642 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.block.TestBlockManager |
   |   | hadoop.ozone.om.ratis.TestOzoneManagerRatisServer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1416/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1416 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 626c82f6d9c2 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 469165e |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1416/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1416/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1416/1/testReport/ |
   | Max. process+thread count | 1325 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service U: 
hadoop-hdds/container-service |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1416/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bgaborg commented on a change in pull request #1208: HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log)

2019-09-09 Thread GitBox
bgaborg commented on a change in pull request #1208: HADOOP-16423. S3Guard 
fsck: Check metadata consistency between S3 and metadatastore (log)
URL: https://github.com/apache/hadoop/pull/1208#discussion_r322453901
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardFsck.java
 ##
 @@ -0,0 +1,395 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.s3guard;
+
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.s3a.S3AFileStatus;
+import org.apache.hadoop.fs.s3a.S3AFileSystem;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.security.InvalidParameterException;
+import java.util.ArrayDeque;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Queue;
+import java.util.Set;
+
+import static java.util.stream.Collectors.toList;
+import static java.util.stream.Collectors.toSet;
+
+/**
+ * Main class for the FSCK factored out from S3GuardTool
+ * The implementation uses fixed DynamoDBMetadataStore as the backing store
+ * for metadata.
+ *
+ * Functions:
+ * 
+ *   Checking metadata consistency between S3 and metadatastore
+ * 
+ */
+public class S3GuardFsck {
+  private static final Logger LOG = LoggerFactory.getLogger(S3GuardFsck.class);
+  public static final String ROOT_PATH_STRING = "/";
+
+  private S3AFileSystem rawFS;
+  private DynamoDBMetadataStore metadataStore;
+
+  /**
+   * Creates an S3GuardFsck.
+   * @param fs the filesystem to compare to
+   * @param ms metadatastore the metadatastore to compare with (dynamo)
+   */
+  S3GuardFsck(S3AFileSystem fs, MetadataStore ms)
+  throws InvalidParameterException {
+this.rawFS = fs;
+
+if (ms == null) {
+  throw new InvalidParameterException("S3AFileSystem should be guarded by"
+  + " a " + DynamoDBMetadataStore.class.getCanonicalName());
+}
+this.metadataStore = (DynamoDBMetadataStore) ms;
+
+if (rawFS.hasMetadataStore()) {
+  throw new InvalidParameterException("Raw fs should not have a "
+  + "metadatastore.");
+}
+  }
+
+  /**
+   * Compares S3 to MS.
+   * Iterative breadth first walk on the S3 structure from a given root.
+   * Creates a list of pairs (metadata in S3 and in the MetadataStore) where
+   * the consistency or any rule is violated.
+   * Uses {@link S3GuardFsckViolationHandler} to handle violations.
+   * The violations are listed in Enums: {@link Violation}
+   *
+   * @param p the root path to start the traversal
+   * @throws IOException
+   * @return
+   */
+  public List<ComparePair> compareS3RootToMs(Path p) throws IOException {
+final Path rootPath = rawFS.qualify(p);
+final S3AFileStatus root =
+(S3AFileStatus) rawFS.getFileStatus(rootPath);
+final List<ComparePair> comparePairs = new ArrayList<>();
+final Queue<S3AFileStatus> queue = new ArrayDeque<>();
+queue.add(root);
+
+while (!queue.isEmpty()) {
+  // pop front node from the queue
+  final S3AFileStatus currentDir = queue.poll();
+
+  // Get a listing of that dir from s3 and add just the files.
+  // (Each directory will be added as a root.)
+  // Files should be casted to S3AFileStatus instead of plain FileStatus
+  // to get the VersionID and Etag.
+  final Path currentDirPath = currentDir.getPath();
+
+  final FileStatus[] s3DirListing = rawFS.listStatus(currentDirPath);
+  final List<S3AFileStatus> children =
+  Arrays.asList(s3DirListing).stream()
+  .filter(status -> !status.isDirectory())
+  .map(S3AFileStatus.class::cast).collect(toList());
+
+  // Compare the directory contents if the listing is authoritative
+  final DirListingMetadata msDirListing =
+  metadataStore.listChildren(currentDirPath);
+  if (msDirListing != null && msDirListing.isAuthoritative()) {
+final ComparePair cP =
+compareAuthDirListing(s3DirListing, msDirListing);
+if (cP.containsViolation()) {
+  comparePairs.add(cP);
+}
+  }
+
+  // Compare directory and contents, but not the listing
+  final List 

[GitHub] [hadoop] bgaborg commented on a change in pull request #1208: HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log)

2019-09-09 Thread GitBox
bgaborg commented on a change in pull request #1208: HADOOP-16423. S3Guard 
fsck: Check metadata consistency between S3 and metadatastore (log)
URL: https://github.com/apache/hadoop/pull/1208#discussion_r322451755
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardFsck.java
 ##
 @@ -0,0 +1,395 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.s3guard;
+
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.s3a.S3AFileStatus;
+import org.apache.hadoop.fs.s3a.S3AFileSystem;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.security.InvalidParameterException;
+import java.util.ArrayDeque;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Queue;
+import java.util.Set;
+
+import static java.util.stream.Collectors.toList;
+import static java.util.stream.Collectors.toSet;
+
+/**
+ * Main class for the FSCK factored out from S3GuardTool
+ * The implementation uses fixed DynamoDBMetadataStore as the backing store
+ * for metadata.
+ *
+ * Functions:
+ * 
+ *   Checking metadata consistency between S3 and metadatastore
+ * 
+ */
+public class S3GuardFsck {
+  private static final Logger LOG = LoggerFactory.getLogger(S3GuardFsck.class);
+  public static final String ROOT_PATH_STRING = "/";
+
+  private S3AFileSystem rawFS;
+  private DynamoDBMetadataStore metadataStore;
+
+  /**
+   * Creates an S3GuardFsck.
+   * @param fs the filesystem to compare to
+   * @param ms metadatastore the metadatastore to compare with (dynamo)
+   */
+  S3GuardFsck(S3AFileSystem fs, MetadataStore ms)
+  throws InvalidParameterException {
+this.rawFS = fs;
+
+if (ms == null) {
+  throw new InvalidParameterException("S3AFileSystem should be guarded by"
+  + " a " + DynamoDBMetadataStore.class.getCanonicalName());
+}
+this.metadataStore = (DynamoDBMetadataStore) ms;
+
+if (rawFS.hasMetadataStore()) {
+  throw new InvalidParameterException("Raw fs should not have a "
+  + "metadatastore.");
+}
+  }
+
+  /**
+   * Compares S3 to MS.
+   * Iterative breadth first walk on the S3 structure from a given root.
+   * Creates a list of pairs (metadata in S3 and in the MetadataStore) where
+   * the consistency or any rule is violated.
+   * Uses {@link S3GuardFsckViolationHandler} to handle violations.
+   * The violations are listed in Enums: {@link Violation}
+   *
+   * @param p the root path to start the traversal
+   * @throws IOException
+   * @return
+   */
+  public List<ComparePair> compareS3RootToMs(Path p) throws IOException {
+final Path rootPath = rawFS.qualify(p);
+final S3AFileStatus root =
+(S3AFileStatus) rawFS.getFileStatus(rootPath);
+final List<ComparePair> comparePairs = new ArrayList<>();
+final Queue<S3AFileStatus> queue = new ArrayDeque<>();
+queue.add(root);
+
+while (!queue.isEmpty()) {
+  // pop front node from the queue
+  final S3AFileStatus currentDir = queue.poll();
+
+  // Get a listing of that dir from s3 and add just the files.
+  // (Each directory will be added as a root.)
+  // Files should be casted to S3AFileStatus instead of plain FileStatus
+  // to get the VersionID and Etag.
+  final Path currentDirPath = currentDir.getPath();
+
+  final FileStatus[] s3DirListing = rawFS.listStatus(currentDirPath);
+  final List<S3AFileStatus> children =
+  Arrays.asList(s3DirListing).stream()
+  .filter(status -> !status.isDirectory())
+  .map(S3AFileStatus.class::cast).collect(toList());
+
+  // Compare the directory contents if the listing is authoritative
+  final DirListingMetadata msDirListing =
+  metadataStore.listChildren(currentDirPath);
+  if (msDirListing != null && msDirListing.isAuthoritative()) {
+final ComparePair cP =
+compareAuthDirListing(s3DirListing, msDirListing);
+if (cP.containsViolation()) {
+  comparePairs.add(cP);
+}
+  }
+
+  // Compare directory and contents, but not the listing
+  final List 

[GitHub] [hadoop] avijayanhwx commented on issue #1411: HDDS-2098 : Ozone shell command prints out ERROR when the log4j file …

2019-09-09 Thread GitBox
avijayanhwx commented on issue #1411: HDDS-2098 : Ozone shell command prints 
out ERROR when the log4j file …
URL: https://github.com/apache/hadoop/pull/1411#issuecomment-529639718
 
 
   > I have a question:
   > During the ozone tarball build, we do copy ozone-shell-log4j.properties 
to etc/hadoop (just as we copy log4j.properties), so why do we see this error? 
Or does something need to be fixed in how this script copies it?
   > 
   > 
https://github.com/apache/hadoop/blob/trunk/hadoop-ozone/dist/dev-support/bin/dist-layout-stitching#L95
   
   Yes, when starting Ozone from the snapshot tarball it works perfectly. 
However, when Ozone is deployed through a management product like Cloudera 
Manager, the log4j properties may not be individually configurable, and we may 
have to rely on a default log4j.properties. In that case, printing a 
FileNotFoundException for ozone shell commands is something we can avoid.
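   To make the intent concrete, here is a minimal sketch of that kind of quiet 
fallback, assuming log4j 1.x's PropertyConfigurator; the class and file names 
are illustrative only and this is not the actual Ozone shell code:

```java
import java.io.File;

import org.apache.log4j.PropertyConfigurator;

/**
 * Hypothetical helper: prefer the shell-specific log4j config, but fall
 * back to the default one instead of surfacing a FileNotFoundException.
 */
public final class ShellLogConfig {

  private ShellLogConfig() {
  }

  public static void configure(String confDir) {
    // Illustrative file names; the real deployment layout may differ.
    File shellProps = new File(confDir, "ozone-shell-log4j.properties");
    File defaultProps = new File(confDir, "log4j.properties");

    if (shellProps.isFile()) {
      PropertyConfigurator.configure(shellProps.getAbsolutePath());
    } else if (defaultProps.isFile()) {
      // Fall back quietly rather than printing an error to the console.
      PropertyConfigurator.configure(defaultProps.getAbsolutePath());
    }
    // If neither file exists, leave log4j's built-in defaults in place.
  }
}
```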


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #568: HADOOP-15691 Add PathCapabilities to FS and FC to complement StreamCapabilities

2019-09-09 Thread GitBox
steveloughran commented on issue #568: HADOOP-15691 Add PathCapabilities to FS 
and FC to complement StreamCapabilities
URL: https://github.com/apache/hadoop/pull/568#issuecomment-529612976
 
 
   Rebased onto trunk to show this is still a live PR; I have not done the 
testing against the stores yet, though.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16530) Update xercesImpl in branch-2

2019-09-09 Thread Jonathan Hung (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16925986#comment-16925986
 ] 

Jonathan Hung commented on HADOOP-16530:


Thanks for working on this, [~iwasakims]/[~jojochuang]. Is this ready to be 
committed to branch-2?

> Update xercesImpl in branch-2
> -
>
> Key: HADOOP-16530
> URL: https://issues.apache.org/jira/browse/HADOOP-16530
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: release-blocker
> Attachments: HADOOP-16530.branch-2.001.patch
>
>
> Hadoop 2 depends on xercesImpl 2.9.1, which is more than 10 years old. The 
> latest version is 2.12.0, released last year. Let's update this dependency.
> HDFS-12221 removed xercesImpl in Hadoop 3. Looking at HDFS-12221, the impact 
> of this dependency is very minimal: only used by offlineimageviewer. 
> TestOfflineEditsViewer passed for me after the update. Not sure about the 
> impact of downstream applications though.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on issue #568: HADOOP-15691 Add PathCapabilities to FS and FC to complement StreamCapabilities

2019-09-09 Thread GitBox
hadoop-yetus removed a comment on issue #568: HADOOP-15691 Add PathCapabilities 
to FS and FC to complement StreamCapabilities
URL: https://github.com/apache/hadoop/pull/568#issuecomment-470505705
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 42 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 8 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 68 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1110 | trunk passed |
   | +1 | compile | 959 | trunk passed |
   | +1 | checkstyle | 218 | trunk passed |
   | +1 | mvnsite | 290 | trunk passed |
   | +1 | shadedclient | 1274 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 393 | trunk passed |
   | +1 | javadoc | 214 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 21 | Maven dependency ordering for patch |
   | -1 | mvninstall | 47 | hadoop-common in the patch failed. |
   | -1 | mvninstall | 32 | hadoop-hdfs-client in the patch failed. |
   | -1 | mvninstall | 17 | hadoop-hdfs-httpfs in the patch failed. |
   | -1 | mvninstall | 20 | hadoop-aws in the patch failed. |
   | -1 | mvninstall | 18 | hadoop-azure in the patch failed. |
   | -1 | mvninstall | 15 | hadoop-azure-datalake in the patch failed. |
   | -1 | compile | 64 | root in the patch failed. |
   | -1 | javac | 64 | root in the patch failed. |
   | -0 | checkstyle | 209 | root: The patch generated 8 new + 584 unchanged - 
0 fixed = 592 total (was 584) |
   | -1 | mvnsite | 43 | hadoop-common in the patch failed. |
   | -1 | mvnsite | 30 | hadoop-hdfs-client in the patch failed. |
   | -1 | mvnsite | 19 | hadoop-hdfs-httpfs in the patch failed. |
   | -1 | mvnsite | 22 | hadoop-aws in the patch failed. |
   | -1 | mvnsite | 19 | hadoop-azure in the patch failed. |
   | -1 | mvnsite | 18 | hadoop-azure-datalake in the patch failed. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | -1 | shadedclient | 48 | patch has errors when building and testing our 
client artifacts. |
   | -1 | findbugs | 31 | hadoop-common in the patch failed. |
   | -1 | findbugs | 30 | hadoop-hdfs-client in the patch failed. |
   | -1 | findbugs | 17 | hadoop-hdfs-httpfs in the patch failed. |
   | -1 | findbugs | 20 | hadoop-aws in the patch failed. |
   | -1 | findbugs | 19 | hadoop-azure in the patch failed. |
   | -1 | findbugs | 16 | hadoop-azure-datalake in the patch failed. |
   | -1 | javadoc | 24 | hadoop-hdfs-project_hadoop-hdfs-client generated 3 new 
+ 0 unchanged - 0 fixed = 3 total (was 0) |
   | -1 | javadoc | 17 | hadoop-hdfs-project_hadoop-hdfs-httpfs generated 2 new 
+ 5 unchanged - 0 fixed = 7 total (was 5) |
   | -1 | javadoc | 20 | hadoop-tools_hadoop-aws generated 2 new + 1 unchanged 
- 0 fixed = 3 total (was 1) |
   | -1 | javadoc | 18 | hadoop-tools_hadoop-azure generated 4 new + 0 
unchanged - 0 fixed = 4 total (was 0) |
   | -1 | javadoc | 15 | hadoop-tools_hadoop-azure-datalake generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   ||| _ Other Tests _ |
   | -1 | unit | 46 | hadoop-common in the patch failed. |
   | -1 | unit | 30 | hadoop-hdfs-client in the patch failed. |
   | -1 | unit | 17 | hadoop-hdfs-httpfs in the patch failed. |
   | -1 | unit | 21 | hadoop-aws in the patch failed. |
   | -1 | unit | 18 | hadoop-azure in the patch failed. |
   | -1 | unit | 17 | hadoop-azure-datalake in the patch failed. |
   | +1 | asflicense | 22 | The patch does not generate ASF License warnings. |
   | | | 5378 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-568/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/568 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 843a7a944d5f 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri 
Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0eba407 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-568/1/artifact/out/patch-mvninstall-hadoop-common-project_hadoop-common.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-568/1/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs-client.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-568/1/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt
 |
 

[GitHub] [hadoop] hadoop-yetus removed a comment on issue #568: HADOOP-15691 Add PathCapabilities to FS and FC to complement StreamCapabilities

2019-09-09 Thread GitBox
hadoop-yetus removed a comment on issue #568: HADOOP-15691 Add PathCapabilities 
to FS and FC to complement StreamCapabilities
URL: https://github.com/apache/hadoop/pull/568#issuecomment-470952525
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 29 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 8 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 46 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1352 | trunk passed |
   | +1 | compile | 1125 | trunk passed |
   | +1 | checkstyle | 216 | trunk passed |
   | +1 | mvnsite | 287 | trunk passed |
   | +1 | shadedclient | 1228 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 396 | trunk passed |
   | +1 | javadoc | 204 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for patch |
   | +1 | mvninstall | 184 | the patch passed |
   | +1 | compile | 924 | the patch passed |
   | -1 | javac | 924 | root generated 2 new + 1483 unchanged - 0 fixed = 1485 
total (was 1483) |
   | -0 | checkstyle | 210 | root: The patch generated 9 new + 583 unchanged - 
0 fixed = 592 total (was 583) |
   | +1 | mvnsite | 286 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 683 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 448 | the patch passed |
   | +1 | javadoc | 211 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 510 | hadoop-common in the patch passed. |
   | +1 | unit | 115 | hadoop-hdfs-client in the patch passed. |
   | +1 | unit | 259 | hadoop-hdfs-httpfs in the patch passed. |
   | +1 | unit | 281 | hadoop-aws in the patch passed. |
   | +1 | unit | 90 | hadoop-azure in the patch passed. |
   | +1 | unit | 55 | hadoop-azure-datalake in the patch passed. |
   | +1 | asflicense | 41 | The patch does not generate ASF License warnings. |
   | | | 8988 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-568/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/568 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 74f8f948d9d7 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri 
Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / de15a66 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-568/2/artifact/out/diff-compile-javac-root.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-568/2/artifact/out/diff-checkstyle-root.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-568/2/testReport/ |
   | Max. process+thread count | 1322 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs-httpfs 
hadoop-tools/hadoop-aws hadoop-tools/hadoop-azure 
hadoop-tools/hadoop-azure-datalake U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-568/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1415: HDDS-2075. Tracing in OzoneManager call is propagated with wrong parent

2019-09-09 Thread GitBox
hadoop-yetus commented on issue #1415: HDDS-2075. Tracing in OzoneManager call 
is propagated with wrong parent
URL: https://github.com/apache/hadoop/pull/1415#issuecomment-529606399
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 1333 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 45 | Maven dependency ordering for branch |
   | +1 | mvninstall | 647 | trunk passed |
   | +1 | compile | 391 | trunk passed |
   | +1 | checkstyle | 75 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 947 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 172 | trunk passed |
   | 0 | spotbugs | 479 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 702 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for patch |
   | +1 | mvninstall | 562 | the patch passed |
   | +1 | compile | 374 | the patch passed |
   | +1 | javac | 374 | the patch passed |
   | +1 | checkstyle | 79 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 744 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 167 | the patch passed |
   | +1 | findbugs | 653 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 315 | hadoop-hdds in the patch passed. |
   | -1 | unit | 236 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 40 | The patch does not generate ASF License warnings. |
   | | | 7674 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.ratis.TestOzoneManagerRatisServer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.0 Server=19.03.0 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1415/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1415 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux fe5b73ebf793 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 147f986 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1415/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1415/1/testReport/ |
   | Max. process+thread count | 1298 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/client U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1415/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] nandakumar131 commented on issue #1410: HDDS-2076. Read fails because the block cannot be located in the container

2019-09-09 Thread GitBox
nandakumar131 commented on issue #1410: HDDS-2076. Read fails because the block 
cannot be located in the container
URL: https://github.com/apache/hadoop/pull/1410#issuecomment-529588126
 
 
   /retest


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] chittshota commented on issue #1414: HDFS-14835. RBF: Secured Router should not run when it can't initialize DelegationTokenSecretManager

2019-09-09 Thread GitBox
chittshota commented on issue #1414: HDFS-14835. RBF: Secured Router should not 
run when it can't initialize DelegationTokenSecretManager
URL: https://github.com/apache/hadoop/pull/1414#issuecomment-529585656
 
 
   @tasanuma Thanks for reporting this. 
   @goiri  Thanks for tagging me.
   
   While working on this area earlier, I became aware of this issue.
   I am not quite certain about this change. Tokens are just one way to 
authenticate against routers, so failing to start when the secret manager is 
broken may not always make sense. In instances where Kerberos alone is 
sufficient, admins would be forced to set up a secret manager anyway.
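   As a sketch of the alternative being suggested here, the startup check could 
be gated on whether token authentication is actually required. The configuration 
key and class below are hypothetical, introduced only for illustration, and are 
not existing Router code or properties:

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;

/**
 * Hypothetical startup check: only insist on a working secret manager
 * when delegation tokens are actually required for this deployment.
 */
public final class RouterTokenCheck {

  // Illustrative key; not an existing Hadoop/RBF configuration property.
  public static final String DFS_ROUTER_REQUIRE_TOKENS =
      "dfs.federation.router.security.tokens.required";

  private RouterTokenCheck() {
  }

  public static void verifySecretManager(Configuration conf,
      Object secretManager) throws IOException {
    boolean tokensRequired =
        conf.getBoolean(DFS_ROUTER_REQUIRE_TOKENS, false);
    if (tokensRequired && secretManager == null) {
      // Fail fast only when tokens are mandatory for this cluster.
      throw new IOException("Delegation tokens are required but the "
          + "secret manager could not be initialized");
    }
    // Otherwise (e.g. Kerberos-only deployments) continue starting up.
  }
}
```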



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mukul1987 opened a new pull request #1416: HDDS-2102. HddsVolumeChecker should use java optional in place of Guava optional. Contributed by Mukul Kumar Singh.

2019-09-09 Thread GitBox
mukul1987 opened a new pull request #1416: HDDS-2102. HddsVolumeChecker should 
use java optional in place of Guava optional. Contributed by Mukul Kumar Singh.
URL: https://github.com/apache/hadoop/pull/1416
 
 
   HddsVolumeChecker should use Java's Optional in place of Guava's Optional, 
as the Guava Optional is marked as unstable.
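   For context, a generic before/after illustration of this kind of migration; 
the names are made up for the example and this is not the actual 
HddsVolumeChecker code:

```java
import java.util.Optional;

public class OptionalMigrationExample {

  // Before: Guava's com.google.common.base.Optional (shown commented out
  // here to keep the example free of the Guava dependency):
  //   Optional<String> name = Optional.fromNullable(rawName);
  //   String result = name.or("unknown");

  // After: the JDK's java.util.Optional.
  static String resolve(String rawName) {
    return Optional.ofNullable(rawName).orElse("unknown");
  }

  public static void main(String[] args) {
    System.out.println(resolve(null));      // unknown
    System.out.println(resolve("volume1")); // volume1
  }
}
```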


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bshashikant closed pull request #1364: HDDS-1843. Undetectable corruption after restart of a datanode.

2019-09-09 Thread GitBox
bshashikant closed pull request #1364: HDDS-1843. Undetectable corruption after 
restart of a datanode.
URL: https://github.com/apache/hadoop/pull/1364
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bshashikant commented on issue #1364: HDDS-1843. Undetectable corruption after restart of a datanode.

2019-09-09 Thread GitBox
bshashikant commented on issue #1364: HDDS-1843. Undetectable corruption after 
restart of a datanode.
URL: https://github.com/apache/hadoop/pull/1364#issuecomment-529579125
 
 
   Thanks @nandakumar131  @mukul1987 @supratimdeka for the reviews. I have 
committed this change to trunk.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1313: HDFS-13118. SnapshotDiffReport should provide the INode type.

2019-09-09 Thread GitBox
hadoop-yetus commented on issue #1313: HDFS-13118. SnapshotDiffReport should 
provide the INode type.
URL: https://github.com/apache/hadoop/pull/1313#issuecomment-529576153
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 39 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 21 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1080 | trunk passed |
   | +1 | compile | 191 | trunk passed |
   | +1 | checkstyle | 59 | trunk passed |
   | +1 | mvnsite | 117 | trunk passed |
   | +1 | shadedclient | 827 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 89 | trunk passed |
   | 0 | spotbugs | 164 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 288 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 12 | Maven dependency ordering for patch |
   | +1 | mvninstall | 109 | the patch passed |
   | +1 | compile | 180 | the patch passed |
   | +1 | cc | 180 | the patch passed |
   | +1 | javac | 180 | the patch passed |
   | +1 | checkstyle | 54 | hadoop-hdfs-project: The patch generated 0 new + 
384 unchanged - 6 fixed = 384 total (was 390) |
   | +1 | mvnsite | 111 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 694 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 82 | the patch passed |
   | +1 | findbugs | 296 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 116 | hadoop-hdfs-client in the patch passed. |
   | -1 | unit | 5374 | hadoop-hdfs in the patch failed. |
   | +1 | asflicense | 40 | The patch does not generate ASF License warnings. |
   | | | 9804 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.namenode.snapshot.TestSnapshotBlocksMap |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.server.namenode.snapshot.TestSnapshotReplication |
   |   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
   |   | hadoop.hdfs.TestErasureCodingPolicies |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1313/12/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1313 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 67f095d32585 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 60af879 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1313/12/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1313/12/testReport/ |
   | Max. process+thread count | 5340 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-client 
hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1313/12/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] goiri commented on issue #1414: HDFS-14835. RBF: Secured Router should not run when it can't initialize DelegationTokenSecretManager

2019-09-09 Thread GitBox
goiri commented on issue #1414: HDFS-14835. RBF: Secured Router should not run 
when it can't initialize DelegationTokenSecretManager
URL: https://github.com/apache/hadoop/pull/1414#issuecomment-529570835
 
 
   Thanks @tasanuma for the patch.
   I think it makes sense.
   @chittshota would you mind taking a look too?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1407: HADOOP-16490. Improve S3Guard handling of FNFEs in copy

2019-09-09 Thread GitBox
hadoop-yetus commented on issue #1407: HADOOP-16490. Improve S3Guard handling 
of FNFEs in copy
URL: https://github.com/apache/hadoop/pull/1407#issuecomment-529566998
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 156 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 8 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 91 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1543 | trunk passed |
   | +1 | compile | 1397 | trunk passed |
   | +1 | checkstyle | 199 | trunk passed |
   | +1 | mvnsite | 166 | trunk passed |
   | +1 | shadedclient | 1282 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 129 | trunk passed |
   | 0 | spotbugs | 85 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 241 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 31 | Maven dependency ordering for patch |
   | +1 | mvninstall | 108 | the patch passed |
   | +1 | compile | 1094 | the patch passed |
   | +1 | javac | 1094 | the patch passed |
   | +1 | checkstyle | 157 | the patch passed |
   | +1 | mvnsite | 139 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 746 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 123 | the patch passed |
   | +1 | findbugs | 221 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 647 | hadoop-common in the patch passed. |
   | +1 | unit | 101 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 60 | The patch does not generate ASF License warnings. |
   | | | 8608 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1407/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1407 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux d689c1fa26f2 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 60af879 |
   | Default Java | 1.8.0_222 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1407/2/testReport/ |
   | Max. process+thread count | 1810 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1407/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #1388: HADOOP-16255. Add ChecksumFs.rename(path, path, boolean) to rename crc file as well when FileContext.rename(path, path, options) is called.

2019-09-09 Thread GitBox
steveloughran commented on issue #1388: HADOOP-16255. Add 
ChecksumFs.rename(path, path, boolean) to rename crc file as well when 
FileContext.rename(path, path, options) is called.
URL: https://github.com/apache/hadoop/pull/1388#issuecomment-529556865
 
 
   great!
   
   FWIW, you got this into the 3.2 branch about 8h before the 3.2.1 branch was 
cut, so it will be in that release. Please help test those RCs to make sure 
they do the right thing for you.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] adoroszlai commented on issue #1415: HDDS-2075. Tracing in OzoneManager call is propagated with wrong parent

2019-09-09 Thread GitBox
adoroszlai commented on issue #1415: HDDS-2075. Tracing in OzoneManager call is 
propagated with wrong parent
URL: https://github.com/apache/hadoop/pull/1415#issuecomment-529555336
 
 
   /label ozone


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] adoroszlai opened a new pull request #1415: HDDS-2075. Tracing in OzoneManager call is propagated with wrong parent

2019-09-09 Thread GitBox
adoroszlai opened a new pull request #1415: HDDS-2075. Tracing in OzoneManager 
call is propagated with wrong parent
URL: https://github.com/apache/hadoop/pull/1415
 
 
   ## What changes were proposed in this pull request?
   
   Apply tracing to `OzoneManagerProtocol` instead of `OzoneManagerProtocolPB`. 
 The latter only has a single public method, and no other `*ProtocolPB` 
interface is traced.
   
   https://issues.apache.org/jira/browse/HDDS-2075
   
   ## How was this patch tested?
   
   Verified operation hierarchy in Jaeger UI.
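   
   For readers following along, here is a minimal editorial sketch of the general 
idea of tracing calls on an interface (rather than its PB translator) by wrapping 
each method invocation in its own span. It is illustrative only, assumes plain 
OpenTracing plus a JDK dynamic proxy, and is not the Ozone `TracingUtil` code; 
all class and method names below are invented for the example.
   
   ```java
   import java.lang.reflect.InvocationHandler;
   import java.lang.reflect.Proxy;

   import io.opentracing.Span;
   import io.opentracing.Tracer;
   import io.opentracing.util.GlobalTracer;

   /** Illustrative sketch only: not the Ozone TracingUtil implementation. */
   public final class InterfaceTracingSketch {

     private InterfaceTracingSketch() {
     }

     /** Wrap every call on {@code iface} in its own tracing span. */
     @SuppressWarnings("unchecked")
     public static <T> T trace(T delegate, Class<T> iface) {
       Tracer tracer = GlobalTracer.get();
       InvocationHandler handler = (proxy, method, args) -> {
         // One span per logical operation, named after the interface method,
         // so the operation hierarchy in Jaeger reflects the protocol calls.
         Span span = tracer.buildSpan(
             iface.getSimpleName() + "." + method.getName()).start();
         try {
           return method.invoke(delegate, args);
         } finally {
           span.finish();
         }
       };
       return (T) Proxy.newProxyInstance(
           iface.getClassLoader(), new Class<?>[] {iface}, handler);
     }
   }
   ```
   
   Tracing at the `OzoneManagerProtocol` level in this style would yield one span 
per client-visible operation, rather than a single span for the lone public 
method of the PB translator.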


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16438) Introduce a config to control SSL Channel mode in Azure DataLake Store Gen1

2019-09-09 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16438:

Fix Version/s: 3.3.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Introduce a config to control SSL Channel mode in Azure DataLake Store Gen1
> ---
>
> Key: HADOOP-16438
> URL: https://issues.apache.org/jira/browse/HADOOP-16438
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs/adl
>Affects Versions: 2.9.0
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HADOOP-16438.001.patch
>
>
> Currently there is no user control over the SSL channel mode used for server 
> connections. The client tries to connect using SSLChannelMode.OpenSSL and 
> falls back to SSLChannelMode.Default_JSE when there is any issue. 
> A new config is needed to toggle the choice if any issues are observed with 
> OpenSSL. 
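
(As an editorial illustration only: a client-side toggle of this kind would 
typically be driven through Configuration, along the lines below. The property 
key and the value shown are assumptions for the example, not the key introduced 
by this patch.)

{code:java}
// Illustrative only: the key name and value are assumptions, not the
// actual property added by HADOOP-16438.
org.apache.hadoop.conf.Configuration conf =
    new org.apache.hadoop.conf.Configuration();
// Force the JSSE channel mode if OpenSSL misbehaves in this environment.
conf.set("fs.adl.ssl.channel.mode", "Default_JSE");
{code}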



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #1394: Hadoop-16438 ADLS Gen1 OpenSSL config control

2019-09-09 Thread GitBox
steveloughran commented on issue #1394: Hadoop-16438 ADLS Gen1 OpenSSL config 
control
URL: https://github.com/apache/hadoop/pull/1394#issuecomment-529553975
 
 
   +1, committed to trunk. If you want it in earlier versions, just cherry-pick, 
retest, and provide a PR which merges. I'm assuming you will want to do this, so 
I'm leaving it open.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran closed pull request #1394: Hadoop-16438 ADLS Gen1 OpenSSL config control

2019-09-09 Thread GitBox
steveloughran closed pull request #1394: Hadoop-16438 ADLS Gen1 OpenSSL config 
control
URL: https://github.com/apache/hadoop/pull/1394
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16438) Introduce a config to control SSL Channel mode in Azure DataLake Store Gen1

2019-09-09 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16925871#comment-16925871
 ] 

Hudson commented on HADOOP-16438:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17261 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17261/])
HADOOP-16438. ADLS Gen1 OpenSSL config control. (stevel: rev 
147f98629cfa799044d5a911221f365a03f9380c)
* (edit) hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* (edit) hadoop-tools/hadoop-azure-datalake/pom.xml
* (edit) 
hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/live/TestAdlSdkConfiguration.java
* (edit) 
hadoop-tools/hadoop-azure-datalake/src/site/markdown/troubleshooting_adl.md
* (edit) 
hadoop-tools/hadoop-azure-datalake/src/main/java/org/apache/hadoop/fs/adl/AdlFileSystem.java
* (edit) 
hadoop-tools/hadoop-azure-datalake/src/main/java/org/apache/hadoop/fs/adl/AdlConfKeys.java


> Introduce a config to control SSL Channel mode in Azure DataLake Store Gen1
> ---
>
> Key: HADOOP-16438
> URL: https://issues.apache.org/jira/browse/HADOOP-16438
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs/adl
>Affects Versions: 2.9.0
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Minor
> Attachments: HADOOP-16438.001.patch
>
>
> Currently there is no user control over the SSL channel mode used for server 
> connections. The client tries to connect using SSLChannelMode.OpenSSL and 
> falls back to SSLChannelMode.Default_JSE when there is any issue. 
> A new config is needed to toggle the choice if any issues are observed with 
> OpenSSL. 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #1394: Hadoop-16438 ADLS Gen1 OpenSSL config control

2019-09-09 Thread GitBox
steveloughran commented on issue #1394: Hadoop-16438 ADLS Gen1 OpenSSL config 
control
URL: https://github.com/apache/hadoop/pull/1394#issuecomment-529550781
 
 
   Changes in the Hadoop code are good.
   
   >> In "Default", does the user get told that they can't load openssl? As the 
(reverted) s3a one did this and it just added yet another noisy log message 
everywhere.
   
   > A log line is printed only if SSLContext with OpenSSL could not be created.
   
   This means that by default a warning is going to be printed unless OpenSSL 
is happy. I'm not convinced that is actually useful to most people, and I'm 
going to make sure that when s3a adds OpenSSL it isn't going to bother logging 
at info. However: it's in your SDK, so it's your choice.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1410: HDDS-2076. Read fails because the block cannot be located in the container

2019-09-09 Thread GitBox
hadoop-yetus commented on issue #1410: HDDS-2076. Read fails because the block 
cannot be located in the container
URL: https://github.com/apache/hadoop/pull/1410#issuecomment-529520412
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 41 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for branch |
   | +1 | mvninstall | 581 | trunk passed |
   | +1 | compile | 383 | trunk passed |
   | +1 | checkstyle | 81 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 881 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 178 | trunk passed |
   | 0 | spotbugs | 418 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 612 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 38 | Maven dependency ordering for patch |
   | +1 | mvninstall | 544 | the patch passed |
   | +1 | compile | 394 | the patch passed |
   | +1 | javac | 394 | the patch passed |
   | +1 | checkstyle | 88 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 713 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 177 | the patch passed |
   | +1 | findbugs | 704 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 196 | hadoop-hdds in the patch failed. |
   | -1 | unit | 195 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 46 | The patch does not generate ASF License warnings. |
   | | | 6058 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.keyvalue.TestKeyValueContainer 
|
   |   | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   |   | hadoop.ozone.om.ratis.TestOzoneManagerRatisServer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1410/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1410 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux cdb643d21b64 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 60af879 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1410/3/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1410/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1410/3/testReport/ |
   | Max. process+thread count | 1263 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service hadoop-ozone/integration-test 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1410/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bgaborg commented on a change in pull request #1407: HADOOP-16490. Improve S3Guard handling of FNFEs in copy

2019-09-09 Thread GitBox
bgaborg commented on a change in pull request #1407: HADOOP-16490. Improve 
S3Guard handling of FNFEs in copy
URL: https://github.com/apache/hadoop/pull/1407#discussion_r322288627
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ARemoteFileChanged.java
 ##
 @@ -723,6 +729,86 @@ public void testRenameEventuallyConsistentDirectory() 
throws Throwable {
 fs.rename(sourcedir, destdir);
   }
 
+  /**
+   * Tests doing a rename() on a file which is eventually visible.
+   */
+  @Test
+  public void testRenameEventuallyVisibleFile() throws Throwable {
+requireS3Guard();
+AmazonS3 s3ClientSpy = spyOnFilesystem();
+Path basedir = path();
+Path sourcedir = new Path(basedir, "sourcedir");
+fs.mkdirs(sourcedir);
+Path destdir = new Path(basedir, "destdir");
+String inconsistent = "inconsistent";
 
 Review comment:
   You could add this as a constant for the whole test class, since you are 
using it more than once.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bgaborg commented on a change in pull request #1407: HADOOP-16490. Improve S3Guard handling of FNFEs in copy

2019-09-09 Thread GitBox
bgaborg commented on a change in pull request #1407: HADOOP-16490. Improve 
S3Guard handling of FNFEs in copy
URL: https://github.com/apache/hadoop/pull/1407#discussion_r322292450
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3GuardTtl.java
 ##
 @@ -356,4 +358,14 @@ public void testListingFilteredExpiredItems() throws 
Exception {
 }
   }
 
+  protected DirListingMetadata getDirListingMetadata(final MetadataStore ms,
 
 Review comment:
   I don't see how this change is connected to the original issue.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bgaborg commented on a change in pull request #1407: HADOOP-16490. Improve S3Guard handling of FNFEs in copy

2019-09-09 Thread GitBox
bgaborg commented on a change in pull request #1407: HADOOP-16490. Improve 
S3Guard handling of FNFEs in copy
URL: https://github.com/apache/hadoop/pull/1407#discussion_r32712
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
 ##
 @@ -2587,6 +2594,30 @@ S3AFileStatus innerGetFileStatus(final Path f,
 entryPoint(INVOCATION_GET_FILE_STATUS);
 checkNotClosed();
 final Path path = qualify(f);
+return resolveFileStatus(path, needEmptyDirectoryFlag, false);
+  }
+
+
+  /**
+   * Get the status of a file or directory, first through S3Guard and then
+   * through S3.
+   * The S3 probes can leave 404 responses in the S3 load balancers; if
+   * a check is only needed for a directory, declaring this saves time and
+   * avoids creating one for the object.
+   * When only probing for directories, if an entry for a file is found in
+   * S3Guard it is returned, but checks for updated values are skipped.
+   * @param path fully qualified path
+   * @param needEmptyDirectoryFlag if true, implementation will calculate
+   *a TRUE or FALSE value for {@link S3AFileStatus#isEmptyDirectory()}
+   * @param onlyProbeForDirectory only perform the directory probes.
+   * @return a S3AFileStatus object
+   * @throws FileNotFoundException when the path does not exist
+   * @throws IOException on other problems.
+   */
+  private S3AFileStatus resolveFileStatus(final Path path,
 
 Review comment:
   I don't like this: why do we need to create another wrapper for this call? I 
mean `getFileStatus` calls `innerGetFileStatus`, which calls `resolveFileStatus`, 
and I don't see why we need the last call here - IMHO there's no need for 
another method in the same class. It will just be another command+click for 
most of us in the IDE, while I don't see any particular gain from it - a better 
way would be to factor the call out into its own class, or at least to create a 
jira for this.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bgaborg commented on a change in pull request #1407: HADOOP-16490. Improve S3Guard handling of FNFEs in copy

2019-09-09 Thread GitBox
bgaborg commented on a change in pull request #1407: HADOOP-16490. Improve 
S3Guard handling of FNFEs in copy
URL: https://github.com/apache/hadoop/pull/1407#discussion_r322217284
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
 ##
 @@ -761,4 +761,29 @@ private Constants() {
* Default change detection require version: true.
*/
   public static final boolean CHANGE_DETECT_REQUIRE_VERSION_DEFAULT = true;
+
+  /**
+   * Number of times to retry any repeatable S3 client request on failure,
+   * excluding throttling requests: {@value}.
+   */
+  public static final String S3GUARD_CONSISTENCY_RETRY_LIMIT =
+  "fs.s3a.s3guard.consistency.retry.limit";
+
+  /**
+   * Default retry limit: {@value}.
+   */
+  public static final int S3GUARD_CONSISTENCY_RETRY_LIMIT_DEFAULT = 7;
+
+  /**
+   * Initial retry interval: {@value}.
+   */
+  public static final String S3GUARD_CONSISTENCY_RETRY_INTERVAL =
+  "fs.s3a.s3guard.consistency.retry.interval";
+
+  /**
+   * Default initial retry interval: {@value}.
 
 Review comment:
   Maybe (really, just maybe, so you don't have to if you don't think it's 
needed) add a comment such as `An exponential back-off is used here: every 
failure doubles the delay.`
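   
   For illustration, here is a minimal sketch of the back-off shape those two 
settings imply. This is an editorial example under the stated assumptions, not 
the S3A retry code itself, and the names are invented.
   
   ```java
   import java.io.FileNotFoundException;
   import java.util.function.BooleanSupplier;

   /** Illustrative sketch only: not the S3A implementation. */
   final class ConsistencyBackoffSketch {
     /**
      * Retry a visibility probe, doubling the delay after every failure.
      * @param probe returns true once the object is visible
      * @param retryLimit maximum number of retries (cf. the retry.limit key)
      * @param initialIntervalMs first delay (cf. the retry.interval key)
      */
     static void awaitVisible(BooleanSupplier probe, int retryLimit,
         long initialIntervalMs)
         throws FileNotFoundException, InterruptedException {
       long delayMs = initialIntervalMs;
       for (int attempt = 0; attempt <= retryLimit; attempt++) {
         if (probe.getAsBoolean()) {
           return;            // the object became visible
         }
         Thread.sleep(delayMs);
         delayMs *= 2;        // exponential back-off: every failure doubles the delay
       }
       throw new FileNotFoundException("object still not visible after retries");
     }
   }
   ```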


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #1407: HADOOP-16490. Improve S3Guard handling of FNFEs in copy

2019-09-09 Thread GitBox
steveloughran commented on issue #1407: HADOOP-16490. Improve S3Guard handling 
of FNFEs in copy
URL: https://github.com/apache/hadoop/pull/1407#issuecomment-529495592
 
 
   Just pushed up a patch with more detailed asserts and some better tuning of 
the staging dirs.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1414: HDFS-14835. RBF: Secured Router should not run when it can't initialize DelegationTokenSecretManager

2019-09-09 Thread GitBox
hadoop-yetus commented on issue #1414: HDFS-14835. RBF: Secured Router should 
not run when it can't initialize DelegationTokenSecretManager
URL: https://github.com/apache/hadoop/pull/1414#issuecomment-529471677
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 41 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1033 | trunk passed |
   | +1 | compile | 33 | trunk passed |
   | +1 | checkstyle | 25 | trunk passed |
   | +1 | mvnsite | 37 | trunk passed |
   | +1 | shadedclient | 751 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 40 | trunk passed |
   | 0 | spotbugs | 66 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 65 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 29 | the patch passed |
   | +1 | compile | 26 | the patch passed |
   | +1 | javac | 26 | the patch passed |
   | +1 | checkstyle | 18 | the patch passed |
   | +1 | mvnsite | 30 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 754 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 38 | the patch passed |
   | +1 | findbugs | 66 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 1421 | hadoop-hdfs-rbf in the patch failed. |
   | +1 | asflicense | 30 | The patch does not generate ASF License warnings. |
   | | | 4560 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup |
   |   | hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken |
   |   | hadoop.hdfs.server.federation.router.TestRouterFaultTolerant |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1414/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1414 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 6b8b7431aac9 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 60af879 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1414/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1414/1/testReport/ |
   | Max. process+thread count | 1604 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1414/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #1407: HADOOP-16490. Improve S3Guard handling of FNFEs in copy

2019-09-09 Thread GitBox
steveloughran commented on issue #1407: HADOOP-16490. Improve S3Guard handling 
of FNFEs in copy
URL: https://github.com/apache/hadoop/pull/1407#issuecomment-529465938
 
 
   FWIW, in the HADOOP-16207 code there was a bug in a recent iteration where 
each task was committing its staging .pendingset files to the temp-dir system 
property of its task, which was being set to a different path on each NM; only 
work written by tasks running on the same NM as the job commit was found. 
Wonder if this is similar.
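   
   (Editorial aside: a tiny sketch of why a node-local temp-dir property breaks 
that; the paths and names below are invented for illustration, this is not the 
committer code.)
   
   ```java
   import java.nio.file.Path;
   import java.nio.file.Paths;

   /** Illustrative only: node-local vs shared locations for .pendingset files. */
   public final class PendingsetLocationSketch {
     public static void main(String[] args) {
       // java.io.tmpdir resolves to a node-local directory, so it differs per NM;
       // a job committer running on another NM will never see files written here.
       Path nodeLocal = Paths.get(System.getProperty("java.io.tmpdir"),
           "task_0001.pendingset");
       // A path on the shared cluster filesystem is visible wherever job commit runs.
       Path shared = Paths.get("/user/alice/.staging/job_0001/task_0001.pendingset");
       System.out.println("node-local (only visible on this NM):  " + nodeLocal);
       System.out.println("shared (visible to the job committer): " + shared);
     }
   }
   ```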


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #1407: HADOOP-16490. Improve S3Guard handling of FNFEs in copy

2019-09-09 Thread GitBox
steveloughran commented on issue #1407: HADOOP-16490. Improve S3Guard handling 
of FNFEs in copy
URL: https://github.com/apache/hadoop/pull/1407#issuecomment-529461418
 
 
   OK. I will have a look at the commit MR jobs.
   
   
   You know all the output goes into target/failsafe, right? It doesn't matter 
if the console fills up, as we get everything there, including the logs.
   
   If I'm not seeing this and you are, you'll be able to set the s3a logs to 
debug and rerun so I get the details.
   
   FWIW, all the DT/session failures imply it's the location of the STS service 
which isn't being found.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #1037: HADOOP-15847 limit the r/w capacity

2019-09-09 Thread GitBox
steveloughran commented on issue #1037: HADOOP-15847 limit the r/w capacity 
URL: https://github.com/apache/hadoop/pull/1037#issuecomment-529452031
 
 
   If that wasn't the right address, then add it as an email address to your 
GitHub account and you'll get the credit.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran closed pull request #1037: HADOOP-15847 limit the r/w capacity

2019-09-09 Thread GitBox
steveloughran closed pull request #1037: HADOOP-15847 limit the r/w capacity 
URL: https://github.com/apache/hadoop/pull/1037
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #1037: HADOOP-15847 limit the r/w capacity

2019-09-09 Thread GitBox
steveloughran commented on issue #1037: HADOOP-15847 limit the r/w capacity 
URL: https://github.com/apache/hadoop/pull/1037#issuecomment-529451682
 
 
   The patch is merged in; closing.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16438) Introduce a config to control SSL Channel mode in Azure DataLake Store Gen1

2019-09-09 Thread Sneha Vijayarajan (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16925627#comment-16925627
 ] 

Sneha Vijayarajan commented on HADOOP-16438:


[~ste...@apache.org] - Kindly request you to spare some time for this PR. 

> Introduce a config to control SSL Channel mode in Azure DataLake Store Gen1
> ---
>
> Key: HADOOP-16438
> URL: https://issues.apache.org/jira/browse/HADOOP-16438
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs/adl
>Affects Versions: 2.9.0
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Minor
> Attachments: HADOOP-16438.001.patch
>
>
> Currently there is no user control over the SSL channel mode used for server 
> connections. The client tries to connect using SSLChannelMode.OpenSSL and 
> falls back to SSLChannelMode.Default_JSE when there is any issue. 
> A new config is needed to toggle the choice if any issues are observed with 
> OpenSSL. 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16549) Remove Unsupported SSL/TLS Versions from Docs/Properties

2019-09-09 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16925609#comment-16925609
 ] 

Wei-Chiu Chuang commented on HADOOP-16549:
--

Also note that HADOOP-16152 intends to upgrade to Jetty 9.4, which supports TLS 
1.3 since 9.4.12. So HADOOP-16152 should also add TLS 1.3 to the enabled 
protocols.
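
As a rough illustration (the exact default and whether TLS 1.3 is appropriate 
depend on the Jetty and JDK in use; the value shown is an assumption, not an 
agreed default), the change would be along the lines of:

{code:java}
// Illustrative only: enabling TLS 1.3 alongside TLS 1.2 once Jetty 9.4.12+ is in.
org.apache.hadoop.conf.Configuration conf =
    new org.apache.hadoop.conf.Configuration();
conf.set("hadoop.ssl.enabled.protocols", "TLSv1.2,TLSv1.3");
{code}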

> Remove Unsupported SSL/TLS Versions from Docs/Properties
> 
>
> Key: HADOOP-16549
> URL: https://issues.apache.org/jira/browse/HADOOP-16549
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, security
>Reporter: Daisuke Kobayashi
>Assignee: Daisuke Kobayashi
>Priority: Minor
> Attachments: HADOOP-16549.001.patch
>
>
> We should remove the following unsupported versions from docs and 
> core-default.xml appropriately.
> TLS v1.0
> TLS v1.1
> SSL v3
> SSLv2Hello
> ref: 
> https://www.eclipse.org/jetty/documentation/9.3.27.v20190418/configuring-ssl.html
> https://github.com/eclipse/jetty.project/issues/866
> [~aajisaka], I happened to find you left TLSv1.1 in 
> https://issues.apache.org/jira/browse/HADOOP-16000. Should we still keep it?



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16549) Remove Unsupported SSL/TLS Versions from Docs/Properties

2019-09-09 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16925604#comment-16925604
 ] 

Wei-Chiu Chuang commented on HADOOP-16549:
--

LGTM, thanks Daisuke.

> Remove Unsupported SSL/TLS Versions from Docs/Properties
> 
>
> Key: HADOOP-16549
> URL: https://issues.apache.org/jira/browse/HADOOP-16549
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, security
>Reporter: Daisuke Kobayashi
>Assignee: Daisuke Kobayashi
>Priority: Minor
> Attachments: HADOOP-16549.001.patch
>
>
> We should remove the following unsupported versions from docs and 
> core-default.xml appropriately.
> TLS v1.0
> TLS v1.1
> SSL v3
> SSLv2Hello
> ref: 
> https://www.eclipse.org/jetty/documentation/9.3.27.v20190418/configuring-ssl.html
> https://github.com/eclipse/jetty.project/issues/866
> [~aajisaka], I happened to find you left TLSv1.1 in 
> https://issues.apache.org/jira/browse/HADOOP-16000. Should we still keep it?



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] tasanuma opened a new pull request #1414: HDFS-14835. RBF: Secured Router should not run when it can't initialize DelegationTokenSecretManager

2019-09-09 Thread GitBox
tasanuma opened a new pull request #1414: HDFS-14835. RBF: Secured Router 
should not run when it can't initialize DelegationTokenSecretManager
URL: https://github.com/apache/hadoop/pull/1414
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] nandakumar131 commented on a change in pull request #1410: HDDS-2076. Read fails because the block cannot be located in the container

2019-09-09 Thread GitBox
nandakumar131 commented on a change in pull request #1410: HDDS-2076. Read 
fails because the block cannot be located in the container
URL: https://github.com/apache/hadoop/pull/1410#discussion_r322181017
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/TarContainerPacker.java
 ##
 @@ -58,6 +59,8 @@
 
   private static final String CONTAINER_FILE_NAME = "container.yaml";
 
+  private static final String CONTAINER_BCSID = "BCSID";
 
 Review comment:
   We don't need this change.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] nandakumar131 commented on a change in pull request #1410: HDDS-2076. Read fails because the block cannot be located in the container

2019-09-09 Thread GitBox
nandakumar131 commented on a change in pull request #1410: HDDS-2076. Read 
fails because the block cannot be located in the container
URL: https://github.com/apache/hadoop/pull/1410#discussion_r322181109
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/TarContainerPacker.java
 ##
 @@ -30,6 +30,7 @@
 import java.nio.file.Paths;
 import java.util.stream.Collectors;
 
+import com.google.common.primitives.Longs;
 
 Review comment:
   Not needed.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] supratimdeka commented on issue #1377: HDDS-2057. Incorrect Default OM Port in Ozone FS URI Error Message. Contributed by Supratim Deka

2019-09-09 Thread GitBox
supratimdeka commented on issue #1377: HDDS-2057. Incorrect Default OM Port in 
Ozone FS URI Error Message. Contributed by Supratim Deka
URL: https://github.com/apache/hadoop/pull/1377#issuecomment-529412530
 
 
   @bharatviswa504, the acceptance failure does not appear to be related to the 
patch.
   smoketests.hadoop27-hadoopo3fs.Test hadoop dfs fails with:
   
   Exception in thread "main" java.lang.NoClassDefFoundError: 
org/apache/hadoop/hdds/conf/OzoneConfiguration
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at 
org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2204)
at 
org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2169)
at 
org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2265)
at 
org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2652)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2665)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:93)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2701)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2683)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:372)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16551) The changelog*.md seems not generated when create-release

2019-09-09 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16925544#comment-16925544
 ] 

Akira Ajisaka commented on HADOOP-16551:


Reverted HADOOP-16061 from branch-3.1 and branch-3.1.3, and I think this issue 
is now fixed. Hi [~tangzhankun], would you run the create-release script again?

> The changelog*.md seems not generated when create-release
> -
>
> Key: HADOOP-16551
> URL: https://issues.apache.org/jira/browse/HADOOP-16551
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Zhankun Tang
>Priority: Blocker
>
> Hi,
>  When creating the Hadoop 3.1.3 release with the "create-release" script, the 
> mvn site step succeeded, but it then complained and failed:
> {code:java}
> dev-support/bin/create-release --asfrelease --docker --dockercache{code}
> {code:java}
> $ cd /build/source
> $ mv /build/source/target/hadoop-site-3.1.3.tar.gz 
> /build/source/target/artifacts/hadoop-3.1.3-site.tar.gz
> $ cp -p 
> /build/source/hadoop-common-project/hadoop-common/src/site/markdown/release/3.1.3/CHANGES*.md
>  /build/source/target/artifacts/CHANGES.md
> cp: cannot stat 
> '/build/source/hadoop-common-project/hadoop-common/src/site/markdown/release/3.1.3/CHANGES*.md':
>  No such file or directory
> {code}
> And there's no 3.1.3 release site markdown folder.
> {code:java}
> [ztang@release-vm hadoop]$ ls 
> hadoop-common-project/hadoop-common/src/site/markdown/release/3.1.3
> ls: cannot access 
> hadoop-common-project/hadoop-common/src/site/markdown/release/3.1.3: No such 
> file or directory
> {code}
> I've checked HADOOP-14671 but have no idea why this changelog is missing.
> *Update:*
>  Found that the CHANGELOG.md and RELEASENOTES.md are generated but not in 
> directory "3.1.3"
> {code:java}
> [ztang@release-vm hadoop]$ ls 
> hadoop-common-project/hadoop-common/src/site/markdown/release/
> 0.1.0 0.15.2 0.19.2 0.23.2 0.7.2 2.0.1-alpha 2.6.3 3.0.0-alpha3
> 0.10.0 0.15.3 0.2.0 0.23.3 0.8.0 2.0.2-alpha 2.6.4 3.0.0-alpha4
> 0.10.1 0.15.4 0.20.0 0.23.4 0.9.0 2.0.3-alpha 2.6.5 3.0.0-beta1
> 0.1.1 0.16.0 0.20.1 0.23.5 0.9.1 2.0.4-alpha 2.6.6 3.0.1
> 0.11.0 0.16.1 0.20.2 0.23.6 0.9.2 2.0.5-alpha 2.7.0 3.0.3
> 0.11.1 0.16.2 0.20.203.0 0.23.7 1.0.0 2.0.6-alpha 2.7.1 3.1.0
> 0.11.2 0.16.3 0.20.203.1 0.23.8 1.0.1 2.1.0-beta 2.7.2 3.1.1
> 0.12.0 0.16.4 0.20.204.0 0.23.9 1.0.2 2.1.1-beta 2.7.3 3.1.2
> 0.12.1 0.17.0 0.20.205.0 0.24.0 1.0.3 2.2.0 2.7.4 CHANGELOG.md
> 0.12.2 0.17.1 0.20.3 0.3.0 1.0.4 2.2.1 2.7.5 index.md
> 0.12.3 0.17.2 0.2.1 0.3.1 1.1.0 2.3.0 2.8.0 README.md
> 0.13.0 0.17.3 0.21.0 0.3.2 1.1.1 2.4.0 2.8.1 RELEASENOTES.md
> 0.14.0 0.18.0 0.21.1 0.4.0 1.1.2 2.4.1 2.8.2
> 0.14.1 0.18.1 0.22.0 0.5.0 1.1.3 2.5.0 2.8.3
> 0.14.2 0.18.2 0.22.1 0.6.0 1.2.0 2.5.1 2.9.0
> 0.14.3 0.18.3 0.23.0 0.6.1 1.2.1 2.5.2 2.9.1
> 0.14.4 0.18.4 0.23.1 0.6.2 1.2.2 2.6.0 3.0.0
> 0.15.0 0.19.0 0.23.10 0.7.0 1.3.0 2.6.1 3.0.0-alpha1
> 0.15.1 0.19.1 0.23.11 0.7.1 2.0.0-alpha 2.6.2 3.0.0-alpha2{code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16061) Update Apache Yetus to 0.10.0

2019-09-09 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16061:
---
Fix Version/s: (was: 3.1.3)

Reverted from branch-3.1 and branch-3.1.3.

> Update Apache Yetus to 0.10.0
> -
>
> Key: HADOOP-16061
> URL: https://issues.apache.org/jira/browse/HADOOP-16061
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.3.0, 3.2.1
>
>
> Yetus 0.10.0 is out. Let's upgrade.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16061) Update Apache Yetus to 0.10.0

2019-09-09 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16925537#comment-16925537
 ] 

Akira Ajisaka commented on HADOOP-16061:


Yetus 0.8.0 and later changed the changelog and release note file names, which 
caused HADOOP-16551 in branch-3.1.
I'll revert this from branch-3.1.

> Update Apache Yetus to 0.10.0
> -
>
> Key: HADOOP-16061
> URL: https://issues.apache.org/jira/browse/HADOOP-16061
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
>
> Yetus 0.10.0 is out. Let's upgrade.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16551) The changelog*.md seems not generated when create-release

2019-09-09 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16925534#comment-16925534
 ] 

Akira Ajisaka commented on HADOOP-16551:


Sorry, https://issues.apache.org/jira/browse/HADOOP-16061 broke this in 
branch-3.1.

> The changelog*.md seems not generated when create-release
> -
>
> Key: HADOOP-16551
> URL: https://issues.apache.org/jira/browse/HADOOP-16551
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Zhankun Tang
>Priority: Blocker
>
> Hi,
>  When creating the Hadoop 3.1.3 release with the "create-release" script, the 
> mvn site step succeeded, but it then complained and failed:
> {code:java}
> dev-support/bin/create-release --asfrelease --docker --dockercache{code}
> {code:java}
> $ cd /build/source
> $ mv /build/source/target/hadoop-site-3.1.3.tar.gz 
> /build/source/target/artifacts/hadoop-3.1.3-site.tar.gz
> $ cp -p 
> /build/source/hadoop-common-project/hadoop-common/src/site/markdown/release/3.1.3/CHANGES*.md
>  /build/source/target/artifacts/CHANGES.md
> cp: cannot stat 
> '/build/source/hadoop-common-project/hadoop-common/src/site/markdown/release/3.1.3/CHANGES*.md':
>  No such file or directory
> {code}
> And there's no 3.1.3 release site markdown folder.
> {code:java}
> [ztang@release-vm hadoop]$ ls 
> hadoop-common-project/hadoop-common/src/site/markdown/release/3.1.3
> ls: cannot access 
> hadoop-common-project/hadoop-common/src/site/markdown/release/3.1.3: No such 
> file or directory
> {code}
> I've checked HADOOP-14671 but have no idea why this changelog is missing.
> *Update:*
>  Found that the CHANGELOG.md and RELEASENOTES.md are generated but not in 
> directory "3.1.3"
> {code:java}
> [ztang@release-vm hadoop]$ ls 
> hadoop-common-project/hadoop-common/src/site/markdown/release/
> 0.1.0 0.15.2 0.19.2 0.23.2 0.7.2 2.0.1-alpha 2.6.3 3.0.0-alpha3
> 0.10.0 0.15.3 0.2.0 0.23.3 0.8.0 2.0.2-alpha 2.6.4 3.0.0-alpha4
> 0.10.1 0.15.4 0.20.0 0.23.4 0.9.0 2.0.3-alpha 2.6.5 3.0.0-beta1
> 0.1.1 0.16.0 0.20.1 0.23.5 0.9.1 2.0.4-alpha 2.6.6 3.0.1
> 0.11.0 0.16.1 0.20.2 0.23.6 0.9.2 2.0.5-alpha 2.7.0 3.0.3
> 0.11.1 0.16.2 0.20.203.0 0.23.7 1.0.0 2.0.6-alpha 2.7.1 3.1.0
> 0.11.2 0.16.3 0.20.203.1 0.23.8 1.0.1 2.1.0-beta 2.7.2 3.1.1
> 0.12.0 0.16.4 0.20.204.0 0.23.9 1.0.2 2.1.1-beta 2.7.3 3.1.2
> 0.12.1 0.17.0 0.20.205.0 0.24.0 1.0.3 2.2.0 2.7.4 CHANGELOG.md
> 0.12.2 0.17.1 0.20.3 0.3.0 1.0.4 2.2.1 2.7.5 index.md
> 0.12.3 0.17.2 0.2.1 0.3.1 1.1.0 2.3.0 2.8.0 README.md
> 0.13.0 0.17.3 0.21.0 0.3.2 1.1.1 2.4.0 2.8.1 RELEASENOTES.md
> 0.14.0 0.18.0 0.21.1 0.4.0 1.1.2 2.4.1 2.8.2
> 0.14.1 0.18.1 0.22.0 0.5.0 1.1.3 2.5.0 2.8.3
> 0.14.2 0.18.2 0.22.1 0.6.0 1.2.0 2.5.1 2.9.0
> 0.14.3 0.18.3 0.23.0 0.6.1 1.2.1 2.5.2 2.9.1
> 0.14.4 0.18.4 0.23.1 0.6.2 1.2.2 2.6.0 3.0.0
> 0.15.0 0.19.0 0.23.10 0.7.0 1.3.0 2.6.1 3.0.0-alpha1
> 0.15.1 0.19.1 0.23.11 0.7.1 2.0.0-alpha 2.6.2 3.0.0-alpha2{code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #1410: HDDS-2076. Read fails because the block cannot be located in the container

2019-09-09 Thread GitBox
hadoop-yetus commented on a change in pull request #1410: HDDS-2076. Read fails 
because the block cannot be located in the container
URL: https://github.com/apache/hadoop/pull/1410#discussion_r322139051
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/TarContainerPacker.java
 ##
 @@ -234,7 +237,7 @@ private void includePath(String containerPath, String 
subdir,
   archiveOutputStream);
 }
   }
-
+  
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1410: HDDS-2076. Read fails because the block cannot be located in the container

2019-09-09 Thread GitBox
hadoop-yetus commented on issue #1410: HDDS-2076. Read fails because the block 
cannot be located in the container
URL: https://github.com/apache/hadoop/pull/1410#issuecomment-529376083
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 70 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 75 | Maven dependency ordering for branch |
   | +1 | mvninstall | 630 | trunk passed |
   | +1 | compile | 372 | trunk passed |
   | +1 | checkstyle | 72 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 938 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 177 | trunk passed |
   | 0 | spotbugs | 481 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 711 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 37 | Maven dependency ordering for patch |
   | +1 | mvninstall | 651 | the patch passed |
   | +1 | compile | 426 | the patch passed |
   | +1 | javac | 426 | the patch passed |
   | -0 | checkstyle | 38 | hadoop-hdds: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | -1 | whitespace | 0 | The patch has 1 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | shadedclient | 811 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 198 | the patch passed |
   | +1 | findbugs | 729 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 203 | hadoop-hdds in the patch failed. |
   | -1 | unit | 227 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 38 | The patch does not generate ASF License warnings. |
   | | | 6647 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   |   | hadoop.ozone.container.keyvalue.TestKeyValueContainer |
   |   | hadoop.ozone.om.ratis.TestOzoneManagerRatisServer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1410/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1410 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux b9279037373a 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3b9584d |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1410/2/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1410/2/artifact/out/whitespace-eol.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1410/2/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1410/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1410/2/testReport/ |
   | Max. process+thread count | 1274 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service hadoop-ozone/integration-test 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1410/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16503) [JDK11] TestLeafQueue tests are failing due to WrongTypeOfReturnValue

2019-09-09 Thread Adam Antal (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16925464#comment-16925464
 ] 

Adam Antal commented on HADOOP-16503:
-

YARN-9784 has been committed (thanks to [~kmarton]). 

[~kmarton], could you update the issue on whether the JDK8 fix does indeed 
resolve the failure on JDK11 as well, or whether we need an additional patch to 
fix it on JDK11?

> [JDK11] TestLeafQueue tests are failing due to WrongTypeOfReturnValue
> -
>
> Key: HADOOP-16503
> URL: https://issues.apache.org/jira/browse/HADOOP-16503
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.2.0
>Reporter: Adam Antal
>Assignee: Julia Kinga Marton
>Priority: Major
> Attachments: HADOOP-16503.001.patch
>
>
> Many of the tests in 
> {{org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestLeafQueue}}
>  fail with the following error message when running on JDK11:
> {noformat}
> [ERROR] 
> testSingleQueueWithOneUser(org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestLeafQueue)
>   Time elapsed: 0.204 s  <<< ERROR!
> org.mockito.exceptions.misusing.WrongTypeOfReturnValue:
> YarnConfiguration cannot be returned by getRMNodes()
> getRMNodes() should return ConcurrentMap
> ***
> If you're unsure why you're getting above error read on.
> Due to the nature of the syntax above problem might occur because:
> 1. This exception *might* occur in wrongly written multi-threaded tests.
>Please refer to Mockito FAQ on limitations of concurrency testing.
> 2. A spy is stubbed using when(spy.foo()).then() syntax. It is safer to stub 
> spies -
>- with doReturn|Throw() family of methods. More in javadocs for 
> Mockito.spy() method.
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestLeafQueue.setUpInternal(TestLeafQueue.java:221)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestLeafQueue.setUp(TestLeafQueue.java:144)
>...
> {noformat}
> This is due to the actual execution of the call, while we only need to record 
> its invocation, according to the javadocs and others.
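
(Editorial illustration of the stubbing pattern the error message refers to; the 
class below is invented for the example, the real test spies on the RM context.)

{code:java}
import static org.mockito.Mockito.doReturn;
import static org.mockito.Mockito.spy;
import static org.mockito.Mockito.when;

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class NodeRegistry {                      // invented stand-in for the spied object
  ConcurrentMap<String, String> getNodes() {
    return new ConcurrentHashMap<>();
  }
}

class SpyStubbingSketch {
  void stub() {
    NodeRegistry registry = spy(new NodeRegistry());

    // Risky with a spy: registry.getNodes() actually executes while being
    // stubbed, which is how WrongTypeOfReturnValue-style failures can arise.
    when(registry.getNodes()).thenReturn(new ConcurrentHashMap<>());

    // Safer: record the invocation without executing the real method.
    doReturn(new ConcurrentHashMap<String, String>()).when(registry).getNodes();
  }
}
{code}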



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1375: HDDS-2048: State check during container state transition in datanode should be lock protected

2019-09-09 Thread GitBox
hadoop-yetus commented on issue #1375: HDDS-2048: State check during container 
state transition in datanode should be lock protected
URL: https://github.com/apache/hadoop/pull/1375#issuecomment-529347772
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 948 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 600 | trunk passed |
   | +1 | compile | 380 | trunk passed |
   | +1 | checkstyle | 83 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 884 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 178 | trunk passed |
   | 0 | spotbugs | 445 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 641 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 533 | the patch passed |
   | +1 | compile | 389 | the patch passed |
   | +1 | javac | 389 | the patch passed |
   | +1 | checkstyle | 90 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 684 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 174 | the patch passed |
   | +1 | findbugs | 628 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 295 | hadoop-hdds in the patch passed. |
   | -1 | unit | 181 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 48 | The patch does not generate ASF License warnings. |
   | | | 6928 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.ratis.TestOzoneManagerRatisServer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1375/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1375 |
   | JIRA Issue | HDDS-2048 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux bfea94d0b409 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3b9584d |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1375/5/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1375/5/testReport/ |
   | Max. process+thread count | 1197 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service U: 
hadoop-hdds/container-service |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1375/5/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] smengcl commented on a change in pull request #1360: HDDS-2007. Make ozone fs shell command work with OM HA service ids

2019-09-09 Thread GitBox
smengcl commented on a change in pull request #1360: HDDS-2007. Make ozone fs 
shell command work with OM HA service ids 
URL: https://github.com/apache/hadoop/pull/1360#discussion_r322092242
 
 

 ##
 File path: 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java
 ##
 @@ -131,6 +142,13 @@ public void initialize(URI name, Configuration conf) 
throws IOException {
 // If port number is not specified, read it from config
 omPort = OmUtils.getOmRpcPort(conf);
   }
+} else if (OmUtils.isServiceIdsDefined(conf)) {
+  // When host name or service id is given, and ozone.om.service.ids is
 
 Review comment:
   Correct. This is the case for the command `ozone fs -ls o3fs://bucket.volume/` 
when HA is enabled.
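
A minimal, hypothetical sketch of the client-side usage this refers to. The OM 
HA configuration keys and the service/node names below are assumptions for 
illustration, not taken from the patch, and running it would need an Ozone 
cluster plus the o3fs filesystem classes on the classpath.

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class O3fsHaListSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    // Assumed OM HA settings; key names follow the ozone.om.* pattern and
    // are illustrative only.
    conf.set("ozone.om.service.ids", "omservice");
    conf.set("ozone.om.nodes.omservice", "om1,om2,om3");

    // The authority is only bucket.volume, with no OM host or service id;
    // with HA enabled the service id is expected to come from the config.
    FileSystem fs = FileSystem.get(URI.create("o3fs://bucket.volume/"), conf);
    for (FileStatus status : fs.listStatus(new Path("/"))) {
      System.out.println(status.getPath());
    }
    fs.close();
  }
}
```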





[GitHub] [hadoop] smengcl commented on a change in pull request #1360: HDDS-2007. Make ozone fs shell command work with OM HA service ids

2019-09-09 Thread GitBox
smengcl commented on a change in pull request #1360: HDDS-2007. Make ozone fs 
shell command work with OM HA service ids 
URL: https://github.com/apache/hadoop/pull/1360#discussion_r322091772
 
 

 ##
 File path: 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java
 ##
 @@ -131,6 +142,13 @@ public void initialize(URI name, Configuration conf) 
throws IOException {
 // If port number is not specified, read it from config
 omPort = OmUtils.getOmRpcPort(conf);
   }
+} else if (OmUtils.isServiceIdsDefined(conf)) {
 
 Review comment:
   I wonder when `conf` would not be an `OzoneConfiguration`. In all of my test 
cases, `conf` has always been an `OzoneConfiguration`.





[GitHub] [hadoop] lokeshj1703 commented on issue #1375: HDDS-2048: State check during container state transition in datanode should be lock protected

2019-09-09 Thread GitBox
lokeshj1703 commented on issue #1375: HDDS-2048: State check during container 
state transition in datanode should be lock protected
URL: https://github.com/apache/hadoop/pull/1375#issuecomment-529314982
 
 
   @nandakumar131 Thanks for reviewing the changes! The 3rd commit addresses your 
comments. There was also a missing unlock for the container read lock taken in 
the BlockManagerImpl class; I have fixed that in the same commit as well.
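
A minimal sketch of the read-lock/unlock discipline that fix is about; the 
class and method names below are illustrative stand-ins, not the actual HDDS 
code.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative container stand-in exposing a read-lock pair.
class ContainerSketch {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

  void readLock() {
    lock.readLock().lock();
  }

  void readUnlock() {
    lock.readLock().unlock();
  }
}

class BlockManagerSketch {
  // Acquire the read lock and release it in a finally block so the unlock
  // happens on every path, including exceptional ones; a missing unlock is
  // exactly the kind of bug the finally block prevents.
  long getBlockCount(ContainerSketch container) {
    container.readLock();
    try {
      // ... read block metadata under the container read lock ...
      return 0L;
    } finally {
      container.readUnlock();
    }
  }
}
```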

