[GitHub] [hadoop] bilaharith opened a new pull request #1403: HADOOP-16548 Made flush operation configurable in ABFS

2019-09-04 Thread GitBox
bilaharith opened a new pull request #1403: HADOOP-16548 Made flush operation 
configurable in ABFS
URL: https://github.com/apache/hadoop/pull/1403
 
 
   Made flush operation configurable in ABFS driver for performance 
improvements.
   
   Driver test results using a Namespace enabled account in Central India:
   
   fs.azure.enable.abfs.flush = false
   
   mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
   Tests run: 42, Failures: 0, Errors: 0, Skipped: 0
   Tests run: 394, Failures: 1, Errors: 1, Skipped: 21
   Tests run: 190, Failures: 0, Errors: 0, Skipped: 15
   
   
   fs.azure.enable.abfs.flush = true
   
   mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
   Tests run: 42, Failures: 0, Errors: 0, Skipped: 0
   Tests run: 394, Failures: 0, Errors: 1, Skipped: 21
   Tests run: 190, Failures: 0, Errors: 0, Skipped: 15
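
   The flag under test would be set in core-site.xml. A minimal sketch, with
   the property name taken from the PR description (verify against the merged
   patch before relying on it):

   ```xml
   <!-- Hypothetical core-site.xml fragment: disables the ABFS flush
        behavior discussed in this PR. -->
   <property>
     <name>fs.azure.enable.abfs.flush</name>
     <value>false</value>
   </property>
   ```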


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16542) Update commons-beanutils version

2019-09-04 Thread kevin su (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16923065#comment-16923065
 ] 

kevin su commented on HADOOP-16542:
---

Thanks [~jojochuang] for the help, upload patch v3 to trigger pre-commit Jenkins

> Update commons-beanutils version
> 
>
> Key: HADOOP-16542
> URL: https://issues.apache.org/jira/browse/HADOOP-16542
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 2.10.0, 3.3.0
>Reporter: Wei-Chiu Chuang
>Assignee: kevin su
>Priority: Major
>  Labels: release-blocker
> Attachments: HADOOP-16542.001.patch, HADOOP-16542.002.patch, 
> HADOOP-16542.003.patch
>
>
> [http://mail-archives.apache.org/mod_mbox/www-announce/201908.mbox/%3cc628798f-315d-4428-8cb1-4ed1ecc95...@apache.org%3e]
>  {quote}
> CVE-2019-10086. Apache Commons Beanutils does not suppress the class 
> property in PropertyUtilsBean
> by default.
> Severity: Medium
> Vendor: The Apache Software Foundation
> Versions Affected: commons-beanutils-1.9.3 and earlier
> Description: A special BeanIntrospector class was added in version 1.9.2.
> This can be used to stop attackers from using the class property of
> Java objects to get access to the classloader.
> However this protection was not enabled by default.
> PropertyUtilsBean (and consequently BeanUtilsBean) now disallows class
> level property access by default, thus protecting against
> CVE-2014-0114.
> Mitigation: 1.X users should migrate to 1.9.4.
> {quote}
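
For downstream consumers, the mitigation is the version bump this Jira tracks.
A sketch of the corresponding Maven dependency change (coordinates are the
standard commons-beanutils ones; the fixed version is named in the advisory):

```xml
<!-- Upgrade to the fixed release (1.9.4) per the advisory above. -->
<dependency>
  <groupId>commons-beanutils</groupId>
  <artifactId>commons-beanutils</artifactId>
  <version>1.9.4</version>
</dependency>
```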



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16542) Update commons-beanutils version

2019-09-04 Thread kevin su (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kevin su updated HADOOP-16542:
--
Attachment: HADOOP-16542.003.patch

> Update commons-beanutils version
> 
>
> Key: HADOOP-16542
> URL: https://issues.apache.org/jira/browse/HADOOP-16542
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 2.10.0, 3.3.0
>Reporter: Wei-Chiu Chuang
>Assignee: kevin su
>Priority: Major
>  Labels: release-blocker
> Attachments: HADOOP-16542.001.patch, HADOOP-16542.002.patch, 
> HADOOP-16542.003.patch
>
>
> [http://mail-archives.apache.org/mod_mbox/www-announce/201908.mbox/%3cc628798f-315d-4428-8cb1-4ed1ecc95...@apache.org%3e]
>  {quote}
> CVE-2019-10086. Apache Commons Beanutils does not suppress the class 
> property in PropertyUtilsBean
> by default.
> Severity: Medium
> Vendor: The Apache Software Foundation
> Versions Affected: commons-beanutils-1.9.3 and earlier
> Description: A special BeanIntrospector class was added in version 1.9.2.
> This can be used to stop attackers from using the class property of
> Java objects to get access to the classloader.
> However this protection was not enabled by default.
> PropertyUtilsBean (and consequently BeanUtilsBean) now disallows class
> level property access by default, thus protecting against
> CVE-2014-0114.
> Mitigation: 1.X users should migrate to 1.9.4.
> {quote}






[GitHub] [hadoop] lokeshj1703 commented on issue #1326: HDDS-1898. GrpcReplicationService#download cannot replicate the container.

2019-09-04 Thread GitBox
lokeshj1703 commented on issue #1326: HDDS-1898. 
GrpcReplicationService#download cannot replicate the container.
URL: https://github.com/apache/hadoop/pull/1326#issuecomment-528203986
 
 
   @nandakumar131 Thanks for working on the PR! The changes look good to me. +1.





[GitHub] [hadoop] ChenSammi merged pull request #1366: HDDS-1577. Add default pipeline placement policy implementation.

2019-09-04 Thread GitBox
ChenSammi merged pull request #1366: HDDS-1577. Add default pipeline placement 
policy implementation.
URL: https://github.com/apache/hadoop/pull/1366
 
 
   





[GitHub] [hadoop] ChenSammi commented on issue #1366: HDDS-1577. Add default pipeline placement policy implementation.

2019-09-04 Thread GitBox
ChenSammi commented on issue #1366: HDDS-1577. Add default pipeline placement 
policy implementation.
URL: https://github.com/apache/hadoop/pull/1366#issuecomment-528184986
 
 
   +1.  Will commit soon. Thanks Timmy for the contribution. 





[GitHub] [hadoop] anuengineer closed pull request #1154: [HDDS-1200] Add support for checksum verification in data scrubber

2019-09-04 Thread GitBox
anuengineer closed pull request #1154: [HDDS-1200] Add support for checksum 
verification in data scrubber
URL: https://github.com/apache/hadoop/pull/1154
 
 
   





[GitHub] [hadoop] anuengineer commented on issue #1154: [HDDS-1200] Add support for checksum verification in data scrubber

2019-09-04 Thread GitBox
anuengineer commented on issue #1154: [HDDS-1200] Add support for checksum 
verification in data scrubber
URL: https://github.com/apache/hadoop/pull/1154#issuecomment-528183034
 
 
   @hgadre  Thanks for the contribution. All others thanks for the review. I 
have committed this patch to the trunk.





[jira] [Commented] (HADOOP-16531) Log more detail for slow RPC

2019-09-04 Thread Chen Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16922999#comment-16922999
 ] 

Chen Zhang commented on HADOOP-16531:
-

[~ayushtkn] [~xkrogen], do you have time to help review this patch? Thanks

> Log more detail for slow RPC
> 
>
> Key: HADOOP-16531
> URL: https://issues.apache.org/jira/browse/HADOOP-16531
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
> Attachments: HADOOP-16531.001.patch
>
>
> The current implementation only logs the processing time:
> {code:java}
> if ((rpcMetrics.getProcessingSampleCount() > minSampleSize) &&
> (processingTime > threeSigma)) {
>   LOG.warn("Slow RPC : {} took {} {} to process from client {}",
>   methodName, processingTime, RpcMetrics.TIMEUNIT, call);
>   rpcMetrics.incrSlowRpc();
> }
> {code}
> We need to log more details to help us locate the problem (e.g. how long it
> takes to acquire the lock, hold the lock, or do other things).
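
One way to act on this is to break the total time into per-phase durations
before logging. A self-contained sketch; the phase names (lock wait, lock
held, other) and the `format` helper are illustrative, not Hadoop APIs:

```java
public class SlowRpcDetail {
    /** Formats a slow-RPC warning with a per-phase breakdown of the total time. */
    static String format(String method, long lockWaitMs, long lockHeldMs, long otherMs) {
        long total = lockWaitMs + lockHeldMs + otherMs;
        return String.format(
            "Slow RPC : %s took %d ms to process (lock wait %d ms, lock held %d ms, other %d ms)",
            method, total, lockWaitMs, lockHeldMs, otherMs);
    }

    public static void main(String[] args) {
        // Example breakdown: 40 ms waiting for the lock, 10 ms holding it.
        System.out.println(format("getBlockLocations", 40, 10, 5));
    }
}
```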






[jira] [Commented] (HADOOP-16545) Update the release year to 2019

2019-09-04 Thread Zhankun Tang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16922990#comment-16922990
 ] 

Zhankun Tang commented on HADOOP-16545:
---

[~ayushtkn], yeah. it helps. Thanks.

> Update the release year to 2019
> ---
>
> Key: HADOOP-16545
> URL: https://issues.apache.org/jira/browse/HADOOP-16545
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Zhankun Tang
>Priority: Critical
>
> While doing the release, we found we need to update the release year from 2018 to 2019.
> {code:java}
> $ find . -name "pom.xml" | xargs grep -n 2018
> ./hadoop-project/pom.xml:34:2018
> {code}
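
The fix itself is a one-line edit. A hedged shell sketch, demonstrated on a
scratch file rather than the real pom (the `<year>` element is a stand-in;
inspect hadoop-project/pom.xml before running sed on it):

```shell
# Demonstrate the year bump on a throwaway copy of the pom line.
tmp=$(mktemp)
printf '<year>2018</year>\n' > "$tmp"   # stand-in for the 2018 line found by grep
sed -i 's/2018/2019/' "$tmp"            # in-place replace, GNU sed syntax
cat "$tmp"
rm -f "$tmp"
```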






[jira] [Commented] (HADOOP-15726) Create utility to limit frequency of log statements

2019-09-04 Thread Chen Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16922982#comment-16922982
 ] 

Chen Zhang commented on HADOOP-15726:
-

Thanks [~xkrogen] for your response. I'd like to follow up on enhancing the log 
throttling for the read lock; I will file a Jira later.

> Create utility to limit frequency of log statements
> ---
>
> Key: HADOOP-15726
> URL: https://issues.apache.org/jira/browse/HADOOP-15726
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, util
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.0.4, 3.1.2
>
> Attachments: HADOOP-15726.000.patch, HADOOP-15726.001.patch, 
> HADOOP-15726.002.patch, HADOOP-15726.003.patch, 
> HDFS-15726-branch-3.0.003.patch
>
>
> There is a common pattern of logging a behavior that is normally extraneous. 
> Under some circumstances, such a behavior becomes common, flooding the logs 
> and making it difficult to see what else is going on in the system. Under 
> such situations it is beneficial to limit how frequently the extraneous 
> behavior is logged, while capturing some summary information about the 
> suppressed log statements.
> This is currently implemented in {{FSNamesystemLock}} (in HDFS-10713). We 
> have additional use cases for this in HDFS-13791, so this is a good time to 
> create a common utility for different sites to share this logic.
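
The pattern described above can be sketched in a few lines: emit at most one
record per interval and fold a count of suppressed messages into the next
emitted record. The class and method names here are illustrative, not
Hadoop's actual LogThrottlingHelper API:

```java
public class ThrottledLog {
    private final long intervalMs;
    private long lastLogTime;
    private boolean everLogged = false;
    private long suppressed = 0;

    public ThrottledLog(long intervalMs) { this.intervalMs = intervalMs; }

    /** Returns the message to log (with a suppression summary) or null if throttled. */
    public String record(String message, long nowMs) {
        if (everLogged && nowMs - lastLogTime < intervalMs) {
            suppressed++;           // capture summary info about skipped statements
            return null;
        }
        String out = suppressed > 0
            ? message + " (" + suppressed + " similar messages suppressed)"
            : message;
        everLogged = true;
        lastLogTime = nowMs;
        suppressed = 0;
        return out;
    }

    public static void main(String[] args) {
        ThrottledLog log = new ThrottledLog(1000);
        System.out.println(log.record("lock held too long", 0));    // logged
        System.out.println(log.record("lock held too long", 10));   // suppressed -> null
        System.out.println(log.record("lock held too long", 2000)); // logged with summary
    }
}
```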






[GitHub] [hadoop] RogPodge commented on a change in pull request #1399: HADOOP-16543: Cached DNS name resolution error

2019-09-04 Thread GitBox
RogPodge commented on a change in pull request #1399: HADOOP-16543: Cached DNS 
name resolution error
URL: https://github.com/apache/hadoop/pull/1399#discussion_r321032755
 
 

 ##
 File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/DefaultRMFailoverProxyProvider.java
 ##
 @@ -0,0 +1,82 @@
+package org.apache.hadoop.yarn.client;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.retry.DefaultFailoverProxyProvider;
+import org.apache.hadoop.ipc.RPC;
+import org.apache.hadoop.yarn.conf.YarnConfiguration;
+
+import java.io.IOException;
+import java.net.InetSocketAddress;
+
+/**
+ * An implementation of {@link RMFailoverProxyProvider} which does nothing in 
the
+ * event of failover, and always returns the same proxy object.
+ * This is the default non-HA RM Failover proxy provider. It is used to replace
+ * {@link DefaultFailoverProxyProvider}, which was used as the YARN default in non-HA mode.
+ */
+public class DefaultRMFailoverProxyProvider
 
 Review comment:
   Successfully refactored





[GitHub] [hadoop] RogPodge commented on a change in pull request #1399: HADOOP-16543: Cached DNS name resolution error

2019-09-04 Thread GitBox
RogPodge commented on a change in pull request #1399: HADOOP-16543: Cached DNS 
name resolution error
URL: https://github.com/apache/hadoop/pull/1399#discussion_r321032722
 
 

 ##
 File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/ConfiguredRMFailoverProxyProvider.java
 ##
 @@ -25,8 +25,8 @@
 import java.util.HashMap;
 import java.util.Map;
 
-import org.slf4j.Logger;
 
 Review comment:
   No particular reason; I changed the other proxy providers to use the slf4j 
logger for consistency





[GitHub] [hadoop] hadoop-yetus commented on issue #1402: HADOOP-16547. make sure that s3guard prune sets up the FS

2019-09-04 Thread GitBox
hadoop-yetus commented on issue #1402: HADOOP-16547. make sure that s3guard 
prune sets up the FS
URL: https://github.com/apache/hadoop/pull/1402#issuecomment-528125833
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 3801 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1779 | trunk passed |
   | +1 | compile | 37 | trunk passed |
   | +1 | checkstyle | 30 | trunk passed |
   | +1 | mvnsite | 44 | trunk passed |
   | +1 | shadedclient | 871 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 33 | trunk passed |
   | 0 | spotbugs | 68 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 65 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 38 | the patch passed |
   | +1 | compile | 30 | the patch passed |
   | +1 | javac | 30 | the patch passed |
   | +1 | checkstyle | 20 | the patch passed |
   | +1 | mvnsite | 33 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 908 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 27 | the patch passed |
   | +1 | findbugs | 75 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 94 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
   | | | 8023 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1402/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1402 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 07c10077afc9 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ae28747 |
   | Default Java | 1.8.0_222 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1402/1/testReport/ |
   | Max. process+thread count | 413 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1402/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #1401: HDDS-1561: Mark OPEN containers as QUASI_CLOSED as part of Ratis groupRemove

2019-09-04 Thread GitBox
hadoop-yetus commented on issue #1401: HDDS-1561: Mark OPEN containers as 
QUASI_CLOSED as part of Ratis groupRemove
URL: https://github.com/apache/hadoop/pull/1401#issuecomment-528105007
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 39 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 72 | Maven dependency ordering for branch |
   | +1 | mvninstall | 657 | trunk passed |
   | +1 | compile | 396 | trunk passed |
   | +1 | checkstyle | 80 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 883 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 185 | trunk passed |
   | 0 | spotbugs | 440 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 675 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 41 | Maven dependency ordering for patch |
   | +1 | mvninstall | 587 | the patch passed |
   | +1 | compile | 414 | the patch passed |
   | +1 | javac | 414 | the patch passed |
   | -0 | checkstyle | 42 | hadoop-hdds: The patch generated 3 new + 0 
unchanged - 0 fixed = 3 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 3 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 683 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 178 | the patch passed |
   | +1 | findbugs | 662 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 259 | hadoop-hdds in the patch passed. |
   | -1 | unit | 187 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 40 | The patch does not generate ASF License warnings. |
   | | | 6315 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.ratis.TestOzoneManagerRatisServer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1401/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1401 |
   | JIRA Issue | HDDS-1561 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 47571461d1a2 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 337e9b7 |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1401/1/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1401/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1401/1/testReport/ |
   | Max. process+thread count | 1293 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds hadoop-hdds/container-service hadoop-ozone 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1401/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #1364: HDDS-1843. Undetectable corruption after restart of a datanode.

2019-09-04 Thread GitBox
hadoop-yetus commented on issue #1364: HDDS-1843. Undetectable corruption after 
restart of a datanode.
URL: https://github.com/apache/hadoop/pull/1364#issuecomment-528124811
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 43 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 70 | Maven dependency ordering for branch |
   | +1 | mvninstall | 617 | trunk passed |
   | +1 | compile | 395 | trunk passed |
   | +1 | checkstyle | 78 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 890 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 176 | trunk passed |
   | 0 | spotbugs | 444 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 646 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 40 | Maven dependency ordering for patch |
   | +1 | mvninstall | 556 | the patch passed |
   | +1 | compile | 391 | the patch passed |
   | +1 | cc | 391 | the patch passed |
   | +1 | javac | 391 | the patch passed |
   | +1 | checkstyle | 86 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | -1 | whitespace | 0 | The patch has 25 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | shadedclient | 691 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 180 | the patch passed |
   | +1 | findbugs | 669 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 288 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2037 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 52 | The patch does not generate ASF License warnings. |
   | | | 8110 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.scm.node.TestQueryNode |
   |   | hadoop.ozone.TestSecureOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStream |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.TestMiniChaosOzoneCluster |
   |   | hadoop.ozone.om.TestSecureOzoneManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1364/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1364 |
   | Optional Tests | dupname asflicense compile cc mvnsite javac unit javadoc 
mvninstall shadedclient findbugs checkstyle |
   | uname | Linux 8df624d8e13a 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 337e9b7 |
   | Default Java | 1.8.0_222 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1364/5/artifact/out/whitespace-eol.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1364/5/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1364/5/testReport/ |
   | Max. process+thread count | 5375 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/container-service 
hadoop-ozone/integration-test U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1364/5/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] ajfabbri commented on a change in pull request #1359: HADOOP-16430.S3AFilesystem.delete to incrementally update s3guard with deletions

2019-09-04 Thread GitBox
ajfabbri commented on a change in pull request #1359: 
HADOOP-16430.S3AFilesystem.delete to incrementally update s3guard with deletions
URL: https://github.com/apache/hadoop/pull/1359#discussion_r320999268
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/DeleteOperation.java
 ##
 @@ -207,7 +211,7 @@ public DeleteOperation(final StoreContext context,
 "page size out of range: %d", pageSize);
 this.pageSize = pageSize;
 metadataStore = context.getMetadataStore();
-executor = context.createThrottledExecutor(2);
+executor = context.createThrottledExecutor(1);
 
 Review comment:
   what is the intention here? One batched delete at a time per-client?





[GitHub] [hadoop] anuengineer commented on a change in pull request #1344: HDDS-1982 Extend SCMNodeManager to support decommission and maintenance states

2019-09-04 Thread GitBox
anuengineer commented on a change in pull request #1344: HDDS-1982 Extend 
SCMNodeManager to support decommission and maintenance states
URL: https://github.com/apache/hadoop/pull/1344#discussion_r32176
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
 ##
 @@ -185,7 +190,7 @@ public int getNodeCount(NodeState nodestate) {
   @Override
   public NodeState getNodeState(DatanodeDetails datanodeDetails) {
 
 Review comment:
   We might want to write an alternate version that takes the operational status 
too, since these calls are internal. Again, this is not something that needs to 
be done in this patch; I am just writing things down as I see them. Please don't 
treat my suggestions as code-review blockers; they are more like things that 
might be useful in the long run.





[GitHub] [hadoop] anuengineer commented on a change in pull request #1344: HDDS-1982 Extend SCMNodeManager to support decommission and maintenance states

2019-09-04 Thread GitBox
anuengineer commented on a change in pull request #1344: HDDS-1982 Extend 
SCMNodeManager to support decommission and maintenance states
URL: https://github.com/apache/hadoop/pull/1344#discussion_r320998407
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeStatus.java
 ##
 @@ -0,0 +1,88 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.node;
+
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+
+import java.util.Objects;
+
+/**
+ * This class is used to capture the current status of a datanode. This
+ * includes its health (healthy, stale or dead) and its operational status
+ * (in_service, decommissioned or maintenance mode).
+ */
+public class NodeStatus {
+
+  private HddsProtos.NodeOperationalState operationalState;
+  private HddsProtos.NodeState health;
+
+  public NodeStatus(HddsProtos.NodeOperationalState operationalState,
+ HddsProtos.NodeState health) {
+this.operationalState = operationalState;
+this.health = health;
+  }
+
+  public static NodeStatus inServiceHealthy() {
+return new NodeStatus(HddsProtos.NodeOperationalState.IN_SERVICE,
+HddsProtos.NodeState.HEALTHY);
 
 Review comment:
   Is there a reason to allocate this each time? Just create a static one and return a reference to that, maybe? Not important at all, just wondering.
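For illustration, the static-instance idea suggested above could look like the sketch below. The enums are stand-ins for HddsProtos.NodeOperationalState and HddsProtos.NodeState, and the class name is hypothetical, not the actual Ozone code:

```java
// Sketch: cache the commonly requested NodeStatus values instead of
// allocating a new object on every call. Illustrative names only.
public class NodeStatusCache {
    enum OperationalState { IN_SERVICE, DECOMMISSIONING, DECOMMISSIONED }
    enum Health { HEALTHY, STALE, DEAD }

    static final class NodeStatus {
        final OperationalState op;
        final Health health;
        NodeStatus(OperationalState op, Health health) {
            this.op = op;
            this.health = health;
        }
    }

    // One shared immutable instance for the most common combination.
    private static final NodeStatus IN_SERVICE_HEALTHY =
        new NodeStatus(OperationalState.IN_SERVICE, Health.HEALTHY);

    public static NodeStatus inServiceHealthy() {
        return IN_SERVICE_HEALTHY;  // no per-call allocation
    }

    public static void main(String[] args) {
        // The same reference is returned every time.
        System.out.println(inServiceHealthy() == inServiceHealthy());
    }
}
```

Since NodeStatus is immutable here, sharing one instance across callers is safe.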





[GitHub] [hadoop] shanyu commented on a change in pull request #1399: HADOOP-16543: Cached DNS name resolution error

2019-09-04 Thread GitBox
shanyu commented on a change in pull request #1399: HADOOP-16543: Cached DNS 
name resolution error
URL: https://github.com/apache/hadoop/pull/1399#discussion_r320999337
 
 

 ##
 File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/ConfiguredRMFailoverProxyProvider.java
 ##
 @@ -25,8 +25,8 @@
 import java.util.HashMap;
 import java.util.Map;
 
-import org.slf4j.Logger;
 
 Review comment:
   Why is this file being modified?





[GitHub] [hadoop] hadoop-yetus commented on issue #1400: HDDS-2079. Fix TestSecureOzoneManager. Contributed by Xiaoyu Yao.

2019-09-04 Thread GitBox
hadoop-yetus commented on issue #1400: HDDS-2079. Fix TestSecureOzoneManager. 
Contributed by Xiaoyu Yao.
URL: https://github.com/apache/hadoop/pull/1400#issuecomment-528118565
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 41 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 614 | trunk passed |
   | +1 | compile | 386 | trunk passed |
   | +1 | checkstyle | 82 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 876 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 176 | trunk passed |
   | 0 | spotbugs | 418 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 613 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 535 | the patch passed |
   | +1 | compile | 385 | the patch passed |
   | +1 | javac | 385 | the patch passed |
   | +1 | checkstyle | 89 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 683 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 176 | the patch passed |
   | +1 | findbugs | 632 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 294 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1347 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 51 | The patch does not generate ASF License warnings. |
   | | | 7181 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | hadoop.ozone.TestSecureOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1400/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1400 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 5537bc1b3785 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 337e9b7 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1400/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1400/2/testReport/ |
   | Max. process+thread count | 5400 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/integration-test U: 
hadoop-ozone/integration-test |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1400/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] anuengineer commented on a change in pull request #1344: HDDS-1982 Extend SCMNodeManager to support decommission and maintenance states

2019-09-04 Thread GitBox
anuengineer commented on a change in pull request #1344: HDDS-1982 Extend 
SCMNodeManager to support decommission and maintenance states
URL: https://github.com/apache/hadoop/pull/1344#discussion_r320999602
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
 ##
 @@ -151,7 +152,9 @@ private void unregisterMXBean() {
*/
   @Override
  public List<DatanodeDetails> getNodes(NodeState nodestate) {
-return nodeStateManager.getNodes(nodestate).stream()
+return nodeStateManager.getNodes(
+new NodeStatus(HddsProtos.NodeOperationalState.IN_SERVICE, nodestate))
+.stream()
 .map(node -> (DatanodeDetails)node).collect(Collectors.toList());
   }
 
 Review comment:
   In the final patch, should we change the node query function so we can say: get me all the nodes that are in service and healthy, or all nodes in maintenance mode but dead? Let us add that feature when we need it. I am OK with all operations mapping to IN_SERVICE for now. 
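The richer query described above could be sketched as a filter over both status dimensions. All names here (NodeQuery, Node, OpState, Health) are hypothetical, not the NodeStateManager API:

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch of a node query over both dimensions of node status:
// "in service and healthy", "in maintenance but dead", and so on.
public class NodeQuery {
    enum OpState { IN_SERVICE, IN_MAINTENANCE }
    enum Health { HEALTHY, DEAD }

    record Node(String id, OpState op, Health health) {}

    // Return every node matching the requested operational state and health.
    static List<Node> getNodes(List<Node> all, OpState op, Health health) {
        return all.stream()
            .filter(n -> n.op() == op && n.health() == health)
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Node> nodes = List.of(
            new Node("n1", OpState.IN_SERVICE, Health.HEALTHY),
            new Node("n2", OpState.IN_MAINTENANCE, Health.DEAD));
        System.out.println(
            getNodes(nodes, OpState.IN_MAINTENANCE, Health.DEAD).size());
    }
}
```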





[jira] [Commented] (HADOOP-16543) Cached DNS name resolution error

2019-09-04 Thread shanyu zhao (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16922904#comment-16922904
 ] 

shanyu zhao commented on HADOOP-16543:
--

Hi [~ste...@apache.org], thanks for your suggestions. 

1) We've tried changing the DNS TTL, with no luck.
2) The problem is that Hadoop's RMProxy caches the InetSocketAddress and then
retries connecting to the cached IP address.
{code:java}
InetSocketAddress rmAddress = rmProxy.getRMAddress(conf, protocol); {code}
The fix is to create these additional FailoverProxyProvider:

For the non-HA scenario:

- DefaultNoHaRMFailoverProxyProvider (does no DNS resolution)
- AutoRefreshNoHaRMFailoverProxyProvider (does DNS resolution during retries)

For the HA scenario:

- ConfiguredRMFailoverProxyProvider (does no DNS resolution)
- AutoRefreshRMFailoverProxyProvider (does DNS resolution during retries in the HA 
scenario)

And add this configuration to cover the non-HA mode (in addition to 
yarn.client.failover-proxy-provider):

yarn.client.failover-no-ha-proxy-provider
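The core of the "do DNS resolution during retries" idea can be sketched as below: instead of reusing the InetSocketAddress (and thus the IP) captured when the proxy was created, rebuild it from the hostname on each failover. The class and method names are illustrative, not the actual patch:

```java
import java.net.InetSocketAddress;

// Sketch: re-resolve the RM hostname on every retry so a node that came
// back with a new IP address becomes reachable again. Illustrative only.
public class AddressRefresher {
    private final String host;
    private final int port;
    private InetSocketAddress current;

    public AddressRefresher(String host, int port) {
        this.host = host;
        this.port = port;
        this.current = new InetSocketAddress(host, port);
    }

    // Called from performFailover() in this sketch: constructing a new
    // InetSocketAddress from the hostname triggers a fresh DNS lookup
    // instead of reusing the previously cached IP.
    public InetSocketAddress refresh() {
        current = new InetSocketAddress(host, port);
        return current;
    }

    public static void main(String[] args) {
        AddressRefresher r = new AddressRefresher("localhost", 8032);
        System.out.println(r.refresh().getPort());
    }
}
```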

 

 

> Cached DNS name resolution error
> 
>
> Key: HADOOP-16543
> URL: https://issues.apache.org/jira/browse/HADOOP-16543
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.2
>Reporter: Roger Liu
>Priority: Major
>
> In Kubernetes, a node may go down and then come back later with a 
> different IP address. Yarn clients which are already running will be unable 
> to rediscover the node after it comes back up due to caching the original IP 
> address. This is problematic for cases such as Spark HA on Kubernetes, as the 
> node containing the resource manager may go down and come back up, meaning 
> existing node managers must then also be restarted.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16430) S3AFilesystem.delete to incrementally update s3guard with deletions

2019-09-04 Thread Aaron Fabbri (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16922903#comment-16922903
 ] 

Aaron Fabbri commented on HADOOP-16430:
---

Latest PR commits reviewed (+1). Aside: it is nice not having to run a diff on 
a diff just to see what changed, and to be able to follow your commit history 
in the PR.

> S3AFilesystem.delete to incrementally update s3guard with deletions
> ---
>
> Key: HADOOP-16430
> URL: https://issues.apache.org/jira/browse/HADOOP-16430
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0, 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: Screenshot 2019-07-16 at 22.08.31.png
>
>
> Currently S3AFilesystem.delete() only updates the delete at the end of a 
> paged delete operation. This makes it slow when there are many thousands of 
> files to delete ,and increases the window of vulnerability to failures
> Preferred
> * after every bulk DELETE call is issued to S3, queue the (async) delete of 
> all entries in that post.
> * at the end of the delete, await the completion of these operations.
> * inside S3AFS, also do the delete across threads, so that different HTTPS 
> connections can be used.
> This should maximise DDB throughput against tables which aren't IO limited.
> When executed against small IOP limited tables, the parallel DDB DELETE 
> batches will trigger a lot of throttling events; we should make sure these 
> aren't going to trigger failures






[GitHub] [hadoop] shanyu commented on a change in pull request #1399: HADOOP-16543: Cached DNS name resolution error

2019-09-04 Thread GitBox
shanyu commented on a change in pull request #1399: HADOOP-16543: Cached DNS 
name resolution error
URL: https://github.com/apache/hadoop/pull/1399#discussion_r321000480
 
 

 ##
 File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/DefaultRMFailoverProxyProvider.java
 ##
 @@ -0,0 +1,82 @@
+package org.apache.hadoop.yarn.client;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.retry.DefaultFailoverProxyProvider;
+import org.apache.hadoop.ipc.RPC;
+import org.apache.hadoop.yarn.conf.YarnConfiguration;
+
+import java.io.IOException;
+import java.net.InetSocketAddress;
+
+/**
+ * An implementation of {@link RMFailoverProxyProvider} which does nothing in 
the
+ * event of failover, and always returns the same proxy object.
+ * This is the default non-HA RM Failover proxy provider. It is used to replace
+ * {@link DefaultFailoverProxyProvider}, which was used as the YARN default for non-HA.
+ */
+public class DefaultRMFailoverProxyProvider
 
 Review comment:
   For consistency, we should name this DefaultNoHaRMFailoverProxyProvider





[GitHub] [hadoop] anuengineer commented on a change in pull request #1344: HDDS-1982 Extend SCMNodeManager to support decommission and maintenance states

2019-09-04 Thread GitBox
anuengineer commented on a change in pull request #1344: HDDS-1982 Extend 
SCMNodeManager to support decommission and maintenance states
URL: https://github.com/apache/hadoop/pull/1344#discussion_r320994828
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeStateManager.java
 ##
 @@ -219,47 +221,51 @@ private void initialiseState2EventMap() {
*  |   |  | |
*  V   V  | |
* [HEALTHY]--->[STALE]--->[DEAD]
-   *| (TIMEOUT)  | (TIMEOUT)   |
-   *|| |
-   *|| |
-   *|| |
-   *|| |
-   *| (DECOMMISSION) | (DECOMMISSION)  | (DECOMMISSION)
-   *|V |
-   *+--->[DECOMMISSIONING]<+
-   * |
-   * | (DECOMMISSIONED)
-   * |
-   * V
-   *  [DECOMMISSIONED]
*
*/
 
   /**
* Initializes the lifecycle of node state machine.
*/
-  private void initializeStateMachine() {
-stateMachine.addTransition(
+  private void initializeStateMachines() {
+nodeHealthSM.addTransition(
 NodeState.HEALTHY, NodeState.STALE, NodeLifeCycleEvent.TIMEOUT);
-stateMachine.addTransition(
+nodeHealthSM.addTransition(
 NodeState.STALE, NodeState.DEAD, NodeLifeCycleEvent.TIMEOUT);
-stateMachine.addTransition(
+nodeHealthSM.addTransition(
 NodeState.STALE, NodeState.HEALTHY, NodeLifeCycleEvent.RESTORE);
-stateMachine.addTransition(
+nodeHealthSM.addTransition(
 NodeState.DEAD, NodeState.HEALTHY, NodeLifeCycleEvent.RESURRECT);
-stateMachine.addTransition(
-NodeState.HEALTHY, NodeState.DECOMMISSIONING,
-NodeLifeCycleEvent.DECOMMISSION);
-stateMachine.addTransition(
-NodeState.STALE, NodeState.DECOMMISSIONING,
-NodeLifeCycleEvent.DECOMMISSION);
-stateMachine.addTransition(
-NodeState.DEAD, NodeState.DECOMMISSIONING,
-NodeLifeCycleEvent.DECOMMISSION);
-stateMachine.addTransition(
-NodeState.DECOMMISSIONING, NodeState.DECOMMISSIONED,
-NodeLifeCycleEvent.DECOMMISSIONED);
 
+nodeOpStateSM.addTransition(
+NodeOperationalState.IN_SERVICE, NodeOperationalState.DECOMMISSIONING,
+NodeOperationStateEvent.START_DECOMMISSION);
+nodeOpStateSM.addTransition(
+NodeOperationalState.DECOMMISSIONING, NodeOperationalState.IN_SERVICE,
+NodeOperationStateEvent.RETURN_TO_SERVICE);
+nodeOpStateSM.addTransition(
+NodeOperationalState.DECOMMISSIONING,
+NodeOperationalState.DECOMMISSIONED,
+NodeOperationStateEvent.COMPLETE_DECOMMISSION);
+nodeOpStateSM.addTransition(
+NodeOperationalState.DECOMMISSIONED, NodeOperationalState.IN_SERVICE,
+NodeOperationStateEvent.RETURN_TO_SERVICE);
 
 Review comment:
   What happens when we do this? Is this a new node, or do we pick up from 
where we left off? Say there are containers on this machine; are they treated 
as part of the system?
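The addTransition calls in the diff above follow the classic transition-table pattern. A minimal self-contained sketch of that pattern, with illustrative stand-ins for NodeOperationalState and NodeOperationStateEvent:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal transition-table state machine, mirroring the shape of
// initializeStateMachines(). Enum and event names are illustrative.
public class OpStateMachine {
    enum State { IN_SERVICE, DECOMMISSIONING, DECOMMISSIONED }
    enum Event { START_DECOMMISSION, COMPLETE_DECOMMISSION, RETURN_TO_SERVICE }

    private final Map<State, Map<Event, State>> transitions = new HashMap<>();

    void addTransition(State from, State to, Event event) {
        transitions.computeIfAbsent(from, s -> new HashMap<>()).put(event, to);
    }

    // Fire an event; only explicitly registered transitions are legal.
    State fire(State current, Event event) {
        Map<Event, State> row = transitions.get(current);
        if (row == null || !row.containsKey(event)) {
            throw new IllegalStateException(
                "Invalid transition: " + current + " on " + event);
        }
        return row.get(event);
    }

    public static void main(String[] args) {
        OpStateMachine sm = new OpStateMachine();
        sm.addTransition(State.IN_SERVICE, State.DECOMMISSIONING,
            Event.START_DECOMMISSION);
        sm.addTransition(State.DECOMMISSIONING, State.DECOMMISSIONED,
            Event.COMPLETE_DECOMMISSION);
        sm.addTransition(State.DECOMMISSIONED, State.IN_SERVICE,
            Event.RETURN_TO_SERVICE);
        System.out.println(
            sm.fire(State.IN_SERVICE, Event.START_DECOMMISSION));
        // prints DECOMMISSIONING
    }
}
```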





[GitHub] [hadoop] anuengineer commented on a change in pull request #1344: HDDS-1982 Extend SCMNodeManager to support decommission and maintenance states

2019-09-04 Thread GitBox
anuengineer commented on a change in pull request #1344: HDDS-1982 Extend 
SCMNodeManager to support decommission and maintenance states
URL: https://github.com/apache/hadoop/pull/1344#discussion_r320995703
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeStateManager.java
 ##
 @@ -219,47 +221,51 @@ private void initialiseState2EventMap() {
*  |   |  | |
*  V   V  | |
* [HEALTHY]--->[STALE]--->[DEAD]
-   *| (TIMEOUT)  | (TIMEOUT)   |
-   *|| |
-   *|| |
-   *|| |
-   *|| |
-   *| (DECOMMISSION) | (DECOMMISSION)  | (DECOMMISSION)
-   *|V |
-   *+--->[DECOMMISSIONING]<+
-   * |
-   * | (DECOMMISSIONED)
-   * |
-   * V
-   *  [DECOMMISSIONED]
*
*/
 
   /**
* Initializes the lifecycle of node state machine.
*/
-  private void initializeStateMachine() {
-stateMachine.addTransition(
+  private void initializeStateMachines() {
+nodeHealthSM.addTransition(
 NodeState.HEALTHY, NodeState.STALE, NodeLifeCycleEvent.TIMEOUT);
-stateMachine.addTransition(
+nodeHealthSM.addTransition(
 NodeState.STALE, NodeState.DEAD, NodeLifeCycleEvent.TIMEOUT);
-stateMachine.addTransition(
+nodeHealthSM.addTransition(
 NodeState.STALE, NodeState.HEALTHY, NodeLifeCycleEvent.RESTORE);
-stateMachine.addTransition(
+nodeHealthSM.addTransition(
 NodeState.DEAD, NodeState.HEALTHY, NodeLifeCycleEvent.RESURRECT);
-stateMachine.addTransition(
-NodeState.HEALTHY, NodeState.DECOMMISSIONING,
-NodeLifeCycleEvent.DECOMMISSION);
-stateMachine.addTransition(
-NodeState.STALE, NodeState.DECOMMISSIONING,
-NodeLifeCycleEvent.DECOMMISSION);
-stateMachine.addTransition(
-NodeState.DEAD, NodeState.DECOMMISSIONING,
-NodeLifeCycleEvent.DECOMMISSION);
-stateMachine.addTransition(
-NodeState.DECOMMISSIONING, NodeState.DECOMMISSIONED,
-NodeLifeCycleEvent.DECOMMISSIONED);
 
+nodeOpStateSM.addTransition(
+NodeOperationalState.IN_SERVICE, NodeOperationalState.DECOMMISSIONING,
+NodeOperationStateEvent.START_DECOMMISSION);
+nodeOpStateSM.addTransition(
+NodeOperationalState.DECOMMISSIONING, NodeOperationalState.IN_SERVICE,
+NodeOperationStateEvent.RETURN_TO_SERVICE);
+nodeOpStateSM.addTransition(
+NodeOperationalState.DECOMMISSIONING,
+NodeOperationalState.DECOMMISSIONED,
+NodeOperationStateEvent.COMPLETE_DECOMMISSION);
+nodeOpStateSM.addTransition(
+NodeOperationalState.DECOMMISSIONED, NodeOperationalState.IN_SERVICE,
+NodeOperationStateEvent.RETURN_TO_SERVICE);
+
+nodeOpStateSM.addTransition(
+NodeOperationalState.IN_SERVICE,
+NodeOperationalState.ENTERING_MAINTENANCE,
+NodeOperationStateEvent.START_MAINTENANCE);
+nodeOpStateSM.addTransition(
+NodeOperationalState.ENTERING_MAINTENANCE,
+NodeOperationalState.IN_SERVICE,
+NodeOperationStateEvent.RETURN_TO_SERVICE);
+nodeOpStateSM.addTransition(
+NodeOperationalState.ENTERING_MAINTENANCE,
+NodeOperationalState.IN_MAINTENANCE,
+NodeOperationStateEvent.ENTER_MAINTENANCE);
+nodeOpStateSM.addTransition(
+NodeOperationalState.IN_MAINTENANCE, NodeOperationalState.IN_SERVICE,
+NodeOperationStateEvent.RETURN_TO_SERVICE);
 
 Review comment:
   How do we handle the timeout edge case? Maintenance might have a timeout; 
that is, I put the node in maintenance for one day and forget about it. Or is 
that handled outside the state machine?
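One common way to answer the question above is indeed to keep the expiry outside the state machine: record a deadline when the node enters maintenance and let the periodic health-check loop fire RETURN_TO_SERVICE once it passes. The sketch below uses hypothetical names, not anything from the HDDS-1982 patch:

```java
// Sketch: maintenance expiry tracked outside the state machine.
// The periodic node health check polls expired() and, when it returns
// true, fires the RETURN_TO_SERVICE event on the operational state machine.
public class MaintenanceWindow {
    private final long expiryMillis;

    // durationMillis: how long the operator requested, e.g. one day.
    public MaintenanceWindow(long durationMillis) {
        this.expiryMillis = System.currentTimeMillis() + durationMillis;
    }

    public boolean expired() {
        return System.currentTimeMillis() >= expiryMillis;
    }

    public static void main(String[] args) {
        System.out.println(new MaintenanceWindow(0).expired());
        System.out.println(new MaintenanceWindow(86_400_000L).expired());
    }
}
```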





[GitHub] [hadoop] ajfabbri commented on a change in pull request #1359: HADOOP-16430.S3AFilesystem.delete to incrementally update s3guard with deletions

2019-09-04 Thread GitBox
ajfabbri commented on a change in pull request #1359: 
HADOOP-16430.S3AFilesystem.delete to incrementally update s3guard with deletions
URL: https://github.com/apache/hadoop/pull/1359#discussion_r320993759
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextURIBase.java
 ##
 @@ -418,7 +419,7 @@ public void testDeleteDirectory() throws IOException {
 
   @Test
   public void testDeleteNonExistingDirectory() throws IOException {
-String testDirName = "testFile";
+String testDirName = "testDeleteNonExistingDirectory";
 
 Review comment:
   A little easier to debug leftovers, huh? Missing the C language's 
`__func__` macro here.





[GitHub] [hadoop] anuengineer commented on a change in pull request #1344: HDDS-1982 Extend SCMNodeManager to support decommission and maintenance states

2019-09-04 Thread GitBox
anuengineer commented on a change in pull request #1344: HDDS-1982 Extend 
SCMNodeManager to support decommission and maintenance states
URL: https://github.com/apache/hadoop/pull/1344#discussion_r320997400
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeStateManager.java
 ##
 @@ -578,39 +587,33 @@ private void checkNodesHealth() {
  Predicate<Long> deadNodeCondition =
 (lastHbTime) -> lastHbTime < staleNodeDeadline;
 try {
-  for (NodeState state : NodeState.values()) {
-List<UUID> nodes = nodeStateMap.getNodes(state);
-for (UUID id : nodes) {
-  DatanodeInfo node = nodeStateMap.getNodeInfo(id);
-  switch (state) {
-  case HEALTHY:
-// Move the node to STALE if the last heartbeat time is less than
-// configured stale-node interval.
-updateNodeState(node, staleNodeCondition, state,
-  NodeLifeCycleEvent.TIMEOUT);
-break;
-  case STALE:
-// Move the node to DEAD if the last heartbeat time is less than
-// configured dead-node interval.
-updateNodeState(node, deadNodeCondition, state,
-NodeLifeCycleEvent.TIMEOUT);
-// Restore the node if we have received heartbeat before configured
-// stale-node interval.
-updateNodeState(node, healthyNodeCondition, state,
-NodeLifeCycleEvent.RESTORE);
-break;
-  case DEAD:
-// Resurrect the node if we have received heartbeat before
-// configured stale-node interval.
-updateNodeState(node, healthyNodeCondition, state,
-NodeLifeCycleEvent.RESURRECT);
-break;
-// We don't do anything for DECOMMISSIONING and DECOMMISSIONED in
-// heartbeat processing.
-  case DECOMMISSIONING:
-  case DECOMMISSIONED:
-  default:
-  }
+  for(DatanodeInfo node : nodeStateMap.getAllDatanodeInfos()) {
+NodeState state =
+nodeStateMap.getNodeStatus(node.getUuid()).getHealth();
+switch (state) {
+case HEALTHY:
+  // Move the node to STALE if the last heartbeat time is less than
+  // configured stale-node interval.
+  updateNodeState(node, staleNodeCondition, state,
+  NodeLifeCycleEvent.TIMEOUT);
+  break;
+case STALE:
+  // Move the node to DEAD if the last heartbeat time is less than
+  // configured dead-node interval.
+  updateNodeState(node, deadNodeCondition, state,
+  NodeLifeCycleEvent.TIMEOUT);
+  // Restore the node if we have received heartbeat before configured
+  // stale-node interval.
+  updateNodeState(node, healthyNodeCondition, state,
+  NodeLifeCycleEvent.RESTORE);
+  break;
+case DEAD:
+  // Resurrect the node if we have received heartbeat before
+  // configured stale-node interval.
+  updateNodeState(node, healthyNodeCondition, state,
+  NodeLifeCycleEvent.RESURRECT);
+  break;
+default:
 }
 
 Review comment:
   Not sure why we need this loop change, but it does make code reading simpler.





[GitHub] [hadoop] anuengineer commented on a change in pull request #1344: HDDS-1982 Extend SCMNodeManager to support decommission and maintenance states

2019-09-04 Thread GitBox
anuengineer commented on a change in pull request #1344: HDDS-1982 Extend 
SCMNodeManager to support decommission and maintenance states
URL: https://github.com/apache/hadoop/pull/1344#discussion_r320996724
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeStateManager.java
 ##
 @@ -426,18 +432,20 @@ public int getStaleNodeCount() {
* @return dead node count
*/
   public int getDeadNodeCount() {
-return getNodeCount(NodeState.DEAD);
+// TODO - hard coded IN_SERVICE
+return getNodeCount(
+new NodeStatus(NodeOperationalState.IN_SERVICE, NodeState.DEAD));
 
 Review comment:
   Interesting; what happens to a node that is in maintenance mode but 
switched off, or dead? Does that become a dead node? I think I agree with your 
conclusion that it is not, but I am flagging this for others to consider.





[GitHub] [hadoop] anuengineer commented on a change in pull request #1344: HDDS-1982 Extend SCMNodeManager to support decommission and maintenance states

2019-09-04 Thread GitBox
anuengineer commented on a change in pull request #1344: HDDS-1982 Extend 
SCMNodeManager to support decommission and maintenance states
URL: https://github.com/apache/hadoop/pull/1344#discussion_r320995341
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeStateManager.java
 ##
 @@ -219,47 +221,51 @@ private void initialiseState2EventMap() {
*  |   |  | |
*  V   V  | |
* [HEALTHY]--->[STALE]--->[DEAD]
-   *| (TIMEOUT)  | (TIMEOUT)   |
-   *|| |
-   *|| |
-   *|| |
-   *|| |
-   *| (DECOMMISSION) | (DECOMMISSION)  | (DECOMMISSION)
-   *|V |
-   *+--->[DECOMMISSIONING]<+
-   * |
-   * | (DECOMMISSIONED)
-   * |
-   * V
-   *  [DECOMMISSIONED]
*
*/
 
   /**
* Initializes the lifecycle of node state machine.
*/
-  private void initializeStateMachine() {
-stateMachine.addTransition(
+  private void initializeStateMachines() {
+nodeHealthSM.addTransition(
 NodeState.HEALTHY, NodeState.STALE, NodeLifeCycleEvent.TIMEOUT);
-stateMachine.addTransition(
+nodeHealthSM.addTransition(
 NodeState.STALE, NodeState.DEAD, NodeLifeCycleEvent.TIMEOUT);
-stateMachine.addTransition(
+nodeHealthSM.addTransition(
 NodeState.STALE, NodeState.HEALTHY, NodeLifeCycleEvent.RESTORE);
-stateMachine.addTransition(
+nodeHealthSM.addTransition(
 NodeState.DEAD, NodeState.HEALTHY, NodeLifeCycleEvent.RESURRECT);
-stateMachine.addTransition(
-NodeState.HEALTHY, NodeState.DECOMMISSIONING,
-NodeLifeCycleEvent.DECOMMISSION);
-stateMachine.addTransition(
-NodeState.STALE, NodeState.DECOMMISSIONING,
-NodeLifeCycleEvent.DECOMMISSION);
-stateMachine.addTransition(
-NodeState.DEAD, NodeState.DECOMMISSIONING,
-NodeLifeCycleEvent.DECOMMISSION);
-stateMachine.addTransition(
-NodeState.DECOMMISSIONING, NodeState.DECOMMISSIONED,
-NodeLifeCycleEvent.DECOMMISSIONED);
 
+nodeOpStateSM.addTransition(
+NodeOperationalState.IN_SERVICE, NodeOperationalState.DECOMMISSIONING,
+NodeOperationStateEvent.START_DECOMMISSION);
+nodeOpStateSM.addTransition(
+NodeOperationalState.DECOMMISSIONING, NodeOperationalState.IN_SERVICE,
+NodeOperationStateEvent.RETURN_TO_SERVICE);
+nodeOpStateSM.addTransition(
+NodeOperationalState.DECOMMISSIONING,
+NodeOperationalState.DECOMMISSIONED,
+NodeOperationStateEvent.COMPLETE_DECOMMISSION);
+nodeOpStateSM.addTransition(
+NodeOperationalState.DECOMMISSIONED, NodeOperationalState.IN_SERVICE,
+NodeOperationStateEvent.RETURN_TO_SERVICE);
+
+nodeOpStateSM.addTransition(
+NodeOperationalState.IN_SERVICE,
+NodeOperationalState.ENTERING_MAINTENANCE,
+NodeOperationStateEvent.START_MAINTENANCE);
+nodeOpStateSM.addTransition(
+NodeOperationalState.ENTERING_MAINTENANCE,
+NodeOperationalState.IN_SERVICE,
+NodeOperationStateEvent.RETURN_TO_SERVICE);
+nodeOpStateSM.addTransition(
+NodeOperationalState.ENTERING_MAINTENANCE,
+NodeOperationalState.IN_MAINTENANCE,
+NodeOperationStateEvent.ENTER_MAINTENANCE);
 
 Review comment:
   From an English point of view, this is slightly confusing. But I see why :)





[GitHub] [hadoop] anuengineer commented on a change in pull request #1344: HDDS-1982 Extend SCMNodeManager to support decommission and maintenance states

2019-09-04 Thread GitBox
anuengineer commented on a change in pull request #1344: HDDS-1982 Extend 
SCMNodeManager to support decommission and maintenance states
URL: https://github.com/apache/hadoop/pull/1344#discussion_r320998604
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeStatus.java
 ##
 @@ -0,0 +1,88 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.node;
+
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+
+import java.util.Objects;
+
+/**
+ * This class is used to capture the current status of a datanode. This
+ * includes its health (healthy, stale or dead) and its operational state
+ * (in_service, decommissioned or maintenance mode).
+ */
+public class NodeStatus {
+
+  private HddsProtos.NodeOperationalState operationalState;
+  private HddsProtos.NodeState health;
+
+  public NodeStatus(HddsProtos.NodeOperationalState operationalState,
+ HddsProtos.NodeState health) {
+this.operationalState = operationalState;
+this.health = health;
+  }
+
+  public static NodeStatus inServiceHealthy() {
+return new NodeStatus(HddsProtos.NodeOperationalState.IN_SERVICE,
+HddsProtos.NodeState.HEALTHY);
+  }
+
+  public static NodeStatus inServiceStale() {
+return new NodeStatus(HddsProtos.NodeOperationalState.IN_SERVICE,
+HddsProtos.NodeState.STALE);
+  }
+
+  public static NodeStatus inServiceDead() {
+return new NodeStatus(HddsProtos.NodeOperationalState.IN_SERVICE,
+HddsProtos.NodeState.DEAD);
+  }
+
 
 Review comment:
   I am presuming that you have to define the whole cross product at some 
point, but right now this is all we need?
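The "whole cross product" the reviewer anticipates would mean one factory per (operational state, health) pair. A minimal sketch of that shape, using hypothetical enums standing in for HddsProtos (the decommission states are assumptions from the PR's scope, not the merged API):

```java
public class NodeStatus {
    // Hypothetical stand-ins for HddsProtos.NodeOperationalState / NodeState.
    public enum OpState { IN_SERVICE, DECOMMISSIONING, ENTERING_MAINTENANCE, IN_MAINTENANCE }
    public enum Health { HEALTHY, STALE, DEAD }

    private final OpState operationalState;
    private final Health health;

    public NodeStatus(OpState operationalState, Health health) {
        this.operationalState = operationalState;
        this.health = health;
    }

    // The PR defines only the three IN_SERVICE factories; the full cross
    // product would add one such factory per remaining (OpState, Health) pair.
    public static NodeStatus inServiceHealthy() {
        return new NodeStatus(OpState.IN_SERVICE, Health.HEALTHY);
    }

    public static NodeStatus decommissioningStale() {
        return new NodeStatus(OpState.DECOMMISSIONING, Health.STALE);
    }

    public OpState getOperationalState() { return operationalState; }
    public Health getHealth() { return health; }
}
```

Starting with only the IN_SERVICE combinations keeps the class small until decommission support actually needs the rest.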





[GitHub] [hadoop] ctubbsii commented on a change in pull request #473: HADOOP-11223. Create UnmodifiableConfiguration

2019-09-04 Thread GitBox
ctubbsii commented on a change in pull request #473: HADOOP-11223. Create 
UnmodifiableConfiguration
URL: https://github.com/apache/hadoop/pull/473#discussion_r320995795
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/UnmodifiableConfiguration.java
 ##
 @@ -0,0 +1,545 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.core.conf;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.File;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.io.Reader;
+import java.io.Writer;
+import java.net.InetSocketAddress;
+import java.net.URL;
+import java.util.Collection;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+import java.util.regex.Pattern;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.StorageUnit;
+import org.apache.hadoop.fs.Path;
+
+import com.google.common.collect.Iterators;
+
+/**
+ * An unmodifiable view of a Configuration.
+ */
+public class UnmodifiableConfiguration extends Configuration {
+
+  final Configuration other;
+
+  public UnmodifiableConfiguration(Configuration other) {
+super(false);
+this.other = other;
+  }
+
+  @Override
+  public Iterator<Map.Entry<String, String>> iterator() {
+return Iterators.unmodifiableIterator(other.iterator());
+  }
+
+  @Override
+  public String get(String name) {
+return this.other.get(name);
+  }
+
+  @Override
+  public boolean onlyKeyExists(String name) {
+return this.other.onlyKeyExists(name);
+  }
+
+  @Override
+  public String getTrimmed(String name) {
+return this.other.getTrimmed(name);
+  }
+
+  @Override
+  public String getTrimmed(String name, String defaultValue) {
+return this.other.getTrimmed(name, defaultValue);
+  }
+
+  @Override
+  public String getRaw(String name) {
+return this.other.getRaw(name);
+  }
+
+  @Override
+  public String get(String name, String defaultValue) {
+return this.other.get(name, defaultValue);
+  }
+
+  @Override
+  public int getInt(String name, int defaultValue) {
+return this.other.getInt(name, defaultValue);
+  }
+
+  @Override
+  public int[] getInts(String name) {
+return this.other.getInts(name);
+  }
+
+  @Override
+  public long getLong(String name, long defaultValue) {
+return this.other.getLong(name, defaultValue);
+  }
+
+  @Override
+  public long getLongBytes(String name, long defaultValue) {
+return this.other.getLongBytes(name, defaultValue);
+  }
+
+  @Override
+  public float getFloat(String name, float defaultValue) {
+return this.other.getFloat(name, defaultValue);
+  }
+
+  @Override
+  public double getDouble(String name, double defaultValue) {
+return this.other.getDouble(name, defaultValue);
+  }
+
+  @Override
+  public boolean getBoolean(String name, boolean defaultValue) {
+return this.other.getBoolean(name, defaultValue);
+  }
+
+  @Override
+  public <T extends Enum<T>> T getEnum(String name, T defaultValue) {
+return this.other.getEnum(name, defaultValue);
+  }
+
+  @Override
+  public long getTimeDuration(String name, long defaultValue, TimeUnit unit) {
+return this.other.getTimeDuration(name, defaultValue, unit);
+  }
+
+  @Override
+  public long getTimeDuration(String name, String defaultValue, TimeUnit unit) {
+return this.other.getTimeDuration(name, defaultValue, unit);
+  }
+
+  @Override
+  public long getTimeDurationHelper(String name, String vStr, TimeUnit unit) {
+return this.other.getTimeDurationHelper(name, vStr, unit);
+  }
+
+  @Override
+  public long[] getTimeDurations(String name, TimeUnit unit) {
+return this.other.getTimeDurations(name, unit);
+  }
+
+  @Override
+  public double getStorageSize(String name, String defaultValue, StorageUnit targetUnit) {
+return this.other.getStorageSize(name, defaultValue, targetUnit);
+  }
+
+  @Override
+  public double getStorageSize(String name, double defaultValue, StorageUnit targetUnit) {
+return this.other.getStorageSize(name, defaultValue, targetUnit);
+  }
+
+  @Override
+  public Pattern getPattern(String name, 
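The archived message truncates mid-class, but the pattern under review is plain read-only delegation. A minimal standalone analogue of it (a hypothetical `UnmodifiableView` over a map, not the PR's actual class), assuming every mutator must also be overridden to throw:

```java
import java.util.Map;

public class UnmodifiableView {
    private final Map<String, String> other;

    public UnmodifiableView(Map<String, String> other) {
        this.other = other;
    }

    // Reads delegate to the wrapped object, as every getter in the PR does.
    public String get(String name) {
        return other.get(name);
    }

    // The view stays unmodifiable only if every mutator is overridden;
    // missing one would silently let writes fall through to the base class.
    public void set(String name, String value) {
        throw new UnsupportedOperationException("read-only view");
    }
}
```

The fragility of this approach is that any setter added to the base class later reopens the view for writes unless the wrapper is updated in step.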

[GitHub] [hadoop] smengcl commented on issue #1398: HDDS-2064. OzoneManagerRatisServer#newOMRatisServer throws NPE when OM HA is configured incorrectly

2019-09-04 Thread GitBox
smengcl commented on issue #1398: HDDS-2064. 
OzoneManagerRatisServer#newOMRatisServer throws NPE when OM HA is configured 
incorrectly
URL: https://github.com/apache/hadoop/pull/1398#issuecomment-528103459
 
 
   /retest





[GitHub] [hadoop] xiaoyuyao commented on issue #1400: HDDS-2079. Fix TestSecureOzoneManager. Contributed by Xiaoyu Yao.

2019-09-04 Thread GitBox
xiaoyuyao commented on issue #1400: HDDS-2079. Fix TestSecureOzoneManager. 
Contributed by Xiaoyu Yao.
URL: https://github.com/apache/hadoop/pull/1400#issuecomment-528086307
 
 
  Merged based on @ChenSammi's +1. Thanks for the review.





[GitHub] [hadoop] xiaoyuyao merged pull request #1400: HDDS-2079. Fix TestSecureOzoneManager. Contributed by Xiaoyu Yao.

2019-09-04 Thread GitBox
xiaoyuyao merged pull request #1400: HDDS-2079. Fix TestSecureOzoneManager. 
Contributed by Xiaoyu Yao.
URL: https://github.com/apache/hadoop/pull/1400
 
 
   





[GitHub] [hadoop] ajayydv commented on a change in pull request #1386: HDDS-2015. Encrypt/decrypt key using symmetric key while writing/reading

2019-09-04 Thread GitBox
ajayydv commented on a change in pull request #1386: HDDS-2015. Encrypt/decrypt 
key using symmetric key while writing/reading
URL: https://github.com/apache/hadoop/pull/1386#discussion_r320959255
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
 ##
 @@ -2647,4 +2648,89 @@ private void completeMultipartUpload(OzoneBucket 
bucket, String keyName,
 Assert.assertEquals(omMultipartUploadCompleteInfo.getKey(), keyName);
 Assert.assertNotNull(omMultipartUploadCompleteInfo.getHash());
   }
+
+  /**
+   * Tests GDPR encryption/decryption.
+   * 1. Create GDPR Enabled bucket.
+   * 2. Create a Key in this bucket so it gets encrypted via GDPRSymmetricKey.
+   * 3. Read key and validate the content/metadata is as expected because the
+   * readKey will decrypt using the GDPR Symmetric Key with details from 
KeyInfo
+   * Metadata.
+   * 4. To check encryption, we forcibly update KeyInfo Metadata and remove the
 
 Review comment:
   I am also not sure if metadata updates are exposed to clients via RPC or 
shell. Maybe @anuengineer or @xiaoyuyao can chime in. 
   > Also, this is a test class to simulate that encryption is working.
   I am not talking about this test class, but the actual BucketImpl class 
which handles bucket metadata operations.





[GitHub] [hadoop] ajayydv commented on a change in pull request #1386: HDDS-2015. Encrypt/decrypt key using symmetric key while writing/reading

2019-09-04 Thread GitBox
ajayydv commented on a change in pull request #1386: HDDS-2015. Encrypt/decrypt 
key using symmetric key while writing/reading
URL: https://github.com/apache/hadoop/pull/1386#discussion_r320959255
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
 ##
 @@ -2647,4 +2648,89 @@ private void completeMultipartUpload(OzoneBucket 
bucket, String keyName,
 Assert.assertEquals(omMultipartUploadCompleteInfo.getKey(), keyName);
 Assert.assertNotNull(omMultipartUploadCompleteInfo.getHash());
   }
+
+  /**
+   * Tests GDPR encryption/decryption.
+   * 1. Create GDPR Enabled bucket.
+   * 2. Create a Key in this bucket so it gets encrypted via GDPRSymmetricKey.
+   * 3. Read key and validate the content/metadata is as expected because the
+   * readKey will decrypt using the GDPR Symmetric Key with details from 
KeyInfo
+   * Metadata.
+   * 4. To check encryption, we forcibly update KeyInfo Metadata and remove the
 
 Review comment:
   I am also not sure if metadata updates are exposed to clients via RPC or 
shell. Maybe @anuengineer or @xiaoyuyao can chime in. 
   
   > > Also, this is a test class to simulate that encryption is working.
   
   I am not talking about this test class, but the actual BucketImpl class 
which handles bucket metadata operations.





[GitHub] [hadoop] smengcl commented on a change in pull request #1398: HDDS-2064. OzoneManagerRatisServer#newOMRatisServer throws NPE when OM HA is configured incorrectly

2019-09-04 Thread GitBox
smengcl commented on a change in pull request #1398: HDDS-2064. 
OzoneManagerRatisServer#newOMRatisServer throws NPE when OM HA is configured 
incorrectly
URL: https://github.com/apache/hadoop/pull/1398#discussion_r320956763
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -614,6 +615,12 @@ private void loadOMHAConfigs(Configuration conf) {
 " system with " + OZONE_OM_SERVICE_IDS_KEY + " and " +
 OZONE_OM_ADDRESS_KEY;
 throw new OzoneIllegalArgumentException(msg);
+  } else if (!isOMAddressSet && found == 0) {
 
 Review comment:
   @bharatviswa504 Thanks for the review. You are right. Now I understand that 
it makes sense to walk through all configured service ids before declaring 
failure.
   
   I should probably just put the `found == 0` check outside the `serviceId` loop.
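That restructuring can be sketched as follows. All names here are hypothetical (the real logic lives in OzoneManager's HA config loading): accumulate matches across every configured service id, and only declare failure after the loop completes.

```java
import java.util.Collection;
import java.util.function.Function;

public class OmHaConfigCheck {
    // matchesForId stands in for "number of OM addresses under this service
    // id that match the local host" in the real config-loading code.
    static String findLocalServiceId(Collection<String> serviceIds,
                                     Function<String, Integer> matchesForId) {
        int found = 0;
        String localServiceId = null;
        for (String serviceId : serviceIds) {
            int matches = matchesForId.apply(serviceId);
            if (matches > 0) {
                found += matches;
                localServiceId = serviceId;
            }
        }
        // The failure check sits outside the loop, so every configured
        // service id is consulted before the configuration is rejected.
        if (found == 0) {
            throw new IllegalArgumentException(
                "No configured OM address matches this host");
        }
        return localServiceId;
    }
}
```

Checking inside the loop would incorrectly fail on the first non-matching service id even when a later one matches.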





[GitHub] [hadoop] hadoop-yetus commented on issue #1359: HADOOP-16430.S3AFilesystem.delete to incrementally update s3guard with deletions

2019-09-04 Thread GitBox
hadoop-yetus commented on issue #1359: HADOOP-16430.S3AFilesystem.delete to 
incrementally update s3guard with deletions
URL: https://github.com/apache/hadoop/pull/1359#issuecomment-528064400
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 42 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 18 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1058 | trunk passed |
   | +1 | compile | 1016 | trunk passed |
   | +1 | checkstyle | 140 | trunk passed |
   | +1 | mvnsite | 129 | trunk passed |
   | +1 | shadedclient | 989 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 106 | trunk passed |
   | 0 | spotbugs | 68 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 188 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for patch |
   | +1 | mvninstall | 81 | the patch passed |
   | +1 | compile | 973 | the patch passed |
   | +1 | javac | 973 | the patch passed |
   | -0 | checkstyle | 148 | root: The patch generated 2 new + 93 unchanged - 5 
fixed = 95 total (was 98) |
   | +1 | mvnsite | 149 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 711 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 39 | hadoop-tools_hadoop-aws generated 1 new + 1 unchanged 
- 0 fixed = 2 total (was 1) |
   | +1 | findbugs | 203 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 549 | hadoop-common in the patch passed. |
   | +1 | unit | 99 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 51 | The patch does not generate ASF License warnings. |
   | | | 6835 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1359/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1359 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 1c11fbfaeee9 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 337e9b7 |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1359/7/artifact/out/diff-checkstyle-root.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1359/7/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1359/7/testReport/ |
   | Max. process+thread count | 1570 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1359/7/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] dineshchitlangia commented on a change in pull request #1386: HDDS-2015. Encrypt/decrypt key using symmetric key while writing/reading

2019-09-04 Thread GitBox
dineshchitlangia commented on a change in pull request #1386: HDDS-2015. 
Encrypt/decrypt key using symmetric key while writing/reading
URL: https://github.com/apache/hadoop/pull/1386#discussion_r320897464
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
 ##
 @@ -2647,4 +2648,89 @@ private void completeMultipartUpload(OzoneBucket 
bucket, String keyName,
 Assert.assertEquals(omMultipartUploadCompleteInfo.getKey(), keyName);
 Assert.assertNotNull(omMultipartUploadCompleteInfo.getHash());
   }
+
+  /**
+   * Tests GDPR encryption/decryption.
+   * 1. Create GDPR Enabled bucket.
+   * 2. Create a Key in this bucket so it gets encrypted via GDPRSymmetricKey.
+   * 3. Read key and validate the content/metadata is as expected because the
+   * readKey will decrypt using the GDPR Symmetric Key with details from 
KeyInfo
+   * Metadata.
+   * 4. To check encryption, we forcibly update KeyInfo Metadata and remove the
 
 Review comment:
   AFAIK, we do not provide an option to end users for updating metadata.
   At best, we provide options to create, delete, info, list, add/remove ACL, 
get/set ACL.
   
   Also, this is a test class to simulate that encryption is working.
   
   That said, we provide an option to enable GDPR for a bucket during creation 
in HDDS-2016. So, if a user is not the admin/owner, they cannot enable the GDPR flag.
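The behaviour the test exercises can be illustrated with plain JCE symmetric crypto. This is a sketch only: `GdprSketch` and its key derivation are hypothetical, not Ozone's actual GDPRSymmetricKey, and ECB mode is used purely for brevity. The secret here plays the role of the details kept in KeyInfo metadata.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

public class GdprSketch {
    // Derive a 128-bit AES key from a secret that, in the scenario under
    // test, would be recovered from the key's metadata at read time.
    static SecretKeySpec deriveKey(String secret) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(secret.getBytes(StandardCharsets.UTF_8));
        return new SecretKeySpec(Arrays.copyOf(digest, 16), "AES");
    }

    static byte[] encrypt(String secret, byte[] plain) throws Exception {
        Cipher c = Cipher.getInstance("AES/ECB/PKCS5Padding");
        c.init(Cipher.ENCRYPT_MODE, deriveKey(secret));
        return c.doFinal(plain);
    }

    static byte[] decrypt(String secret, byte[] cipherText) throws Exception {
        Cipher c = Cipher.getInstance("AES/ECB/PKCS5Padding");
        c.init(Cipher.DECRYPT_MODE, deriveKey(secret));
        return c.doFinal(cipherText);
    }
}
```

With the right secret the round trip recovers the plaintext; strip or alter the secret (as the test forcibly strips the KeyInfo metadata) and the reader gets a padding failure or cipher text instead of the data.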





[GitHub] [hadoop] hadoop-yetus commented on issue #1208: HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log)

2019-09-04 Thread GitBox
hadoop-yetus commented on issue #1208: HADOOP-16423. S3Guard fsck: Check 
metadata consistency between S3 and metadatastore (log)
URL: https://github.com/apache/hadoop/pull/1208#issuecomment-528052580
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 35 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 6 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1226 | trunk passed |
   | +1 | compile | 39 | trunk passed |
   | +1 | checkstyle | 29 | trunk passed |
   | +1 | mvnsite | 44 | trunk passed |
   | +1 | shadedclient | 965 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 27 | trunk passed |
   | 0 | spotbugs | 69 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 67 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 38 | the patch passed |
   | +1 | compile | 31 | the patch passed |
   | +1 | javac | 31 | the patch passed |
   | -0 | checkstyle | 22 | hadoop-tools/hadoop-aws: The patch generated 26 new 
+ 29 unchanged - 0 fixed = 55 total (was 29) |
   | +1 | mvnsite | 36 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 973 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 28 | the patch passed |
   | +1 | findbugs | 81 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 94 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 34 | The patch does not generate ASF License warnings. |
   | | | 3860 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/16/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1208 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 67ff0b461ba1 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 337e9b7 |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/16/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/16/testReport/ |
   | Max. process+thread count | 358 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1208/16/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #1361: HDDS-1553. Add metrics in rack aware container placement policy.

2019-09-04 Thread GitBox
hadoop-yetus commented on issue #1361: HDDS-1553. Add metrics in rack aware 
container placement policy.
URL: https://github.com/apache/hadoop/pull/1361#issuecomment-528050048
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 41 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 6 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for branch |
   | +1 | mvninstall | 588 | trunk passed |
   | +1 | compile | 405 | trunk passed |
   | +1 | checkstyle | 73 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 913 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 179 | trunk passed |
   | 0 | spotbugs | 471 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 690 | trunk passed |
   | -0 | patch | 521 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 41 | Maven dependency ordering for patch |
   | +1 | mvninstall | 581 | the patch passed |
   | +1 | compile | 406 | the patch passed |
   | +1 | javac | 406 | the patch passed |
   | +1 | checkstyle | 89 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 776 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 192 | the patch passed |
   | +1 | findbugs | 689 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 286 | hadoop-hdds in the patch passed. |
   | -1 | unit | 192 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 47 | The patch does not generate ASF License warnings. |
   | | | 6408 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1361/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1361 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 897d688e963c 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 337e9b7 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1361/4/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1361/4/testReport/ |
   | Max. process+thread count | 1346 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/server-scm hadoop-ozone/integration-test U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1361/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16542) Update commons-beanutils version

2019-09-04 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16922766#comment-16922766
 ] 

Wei-Chiu Chuang commented on HADOOP-16542:
--

Actually, I tried to remove it from Hadoop and then built upstream 
applications, but Hive fails to build:

 

So I'm sorry but -1 to remove this entirely. Looks like we have to update it 
instead of removing it.

 
{noformat}

2019-09-03 19:11:46.955312 [INFO] 

2019-09-03 19:11:46.955323 [INFO] BUILD FAILURE
2019-09-03 19:11:46.955335 [INFO] 

2019-09-03 19:11:46.955407 [INFO] Total time: 26.580 s
2019-09-03 19:11:46.955507 [INFO] Finished at: 2019-09-04T02:11:46Z
2019-09-03 19:11:47.316910 [INFO] Final Memory: 70M/707M
2019-09-03 19:11:47.316974 [INFO] 

2019-09-03 19:11:47.317083 [WARNING] The requested profile "hadoop-2" could not 
be activated because it does not exist.
2019-09-03 19:11:47.317813 [ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.6.1:compile (default-compile) 
on project hive-metastore: Compilation failure
2019-09-03 19:11:47.317836 [ERROR] 
/container.common/build/cdh/hive/2.1.1-cdh6.x-SNAPSHOT/source/metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreUtils.java:[57,36]
 package org.apache.commons.beanutils does not exist
2019-09-03 19:11:47.317845 [ERROR] -> [Help 1]
2019-09-03 19:11:47.317855 [ERROR] 
2019-09-03 19:11:47.317863 [ERROR] To see the full stack trace of the errors, 
re-run Maven with the -e switch.
2019-09-03 19:11:47.317872 [ERROR] Re-run Maven using the -X switch to enable 
full debug logging.
2019-09-03 19:11:47.317880 [ERROR] 
2019-09-03 19:11:47.317888 [ERROR] For more information about the errors and 
possible solutions, please read the following articles:
2019-09-03 19:11:47.317903 [ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
2019-09-03 19:11:47.317912 [ERROR] 
2019-09-03 19:11:47.317920 [ERROR] After correcting the problems, you can 
resume the build with the command
 {noformat}
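Since the dependency has to stay, the fix is to move to the patched release. A 
minimal sketch of what the version bump could look like in a Maven 
dependencyManagement section — the exact POM location and any transitive 
exclusions are assumptions, not taken from the attached patches:

```xml
<!-- Hypothetical placement: pin the patched commons-beanutils release so
     downstream consumers such as Hive keep resolving the package while
     picking up the CVE-2019-10086 fix. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>commons-beanutils</groupId>
      <artifactId>commons-beanutils</artifactId>
      <version>1.9.4</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```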

> Update commons-beanutils version
> 
>
> Key: HADOOP-16542
> URL: https://issues.apache.org/jira/browse/HADOOP-16542
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 2.10.0, 3.3.0
>Reporter: Wei-Chiu Chuang
>Assignee: kevin su
>Priority: Major
>  Labels: release-blocker
> Attachments: HADOOP-16542.001.patch, HADOOP-16542.002.patch
>
>
> [http://mail-archives.apache.org/mod_mbox/www-announce/201908.mbox/%3cc628798f-315d-4428-8cb1-4ed1ecc95...@apache.org%3e]
>  {quote}
> CVE-2019-10086. Apache Commons Beanutils does not suppresses the class 
> property in PropertyUtilsBean
> by default.
> Severity: Medium
> Vendor: The Apache Software Foundation
> Versions Affected: commons-beanutils-1.9.3 and earlier
> Description: A special BeanIntrospector class was added in version 1.9.2.
> This can be used to stop attackers from using the class property of
> Java objects to get access to the classloader.
> However this protection was not enabled by default.
> PropertyUtilsBean (and consequently BeanUtilsBean) now disallows class
> level property access by default, thus protecting against
> CVE-2014-0114.
> Mitigation: 1.X users should migrate to 1.9.4.
> {quote}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)




[jira] [Commented] (HADOOP-15565) ViewFileSystem.close doesn't close child filesystems and causes FileSystem objects leak.

2019-09-04 Thread Erik Krogen (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16922760#comment-16922760
 ] 

Erik Krogen commented on HADOOP-15565:
--

Hi [~LiJinglun], sorry for my delayed response, I just returned from vacation 
recently. Here is another review. Things look great overall, the comments are 
mostly minor.

* Can we make the log message in {{ViewFileSystem}} L119 more informative? If 
you were only to look at the logs and not the code, "close failed" would be 
confusing.
* Can you add some comments on {{ViewFileSystem}} L292 explaining why the cache 
can be immutable, on {{InnerCache}} explaining why this cache is necessary 
(maybe just a reference to this JIRA), and on {{InnerCache.Key}} describing why 
it is okay to use a simple key here (as we discussed previously, no need for 
UGI)?
* The tests in {{TestViewFileSystemHdfs}} LGTM, but I don't think they are 
HDFS-specific. Can we put them in {{ViewFileSystemBaseTest}}? Also you have one 
typo, {{testViewFilsSystemInnerCache}} should be 
{{testViewFileSystemInnerCache}}
* Can you describe why the changes in {{TestViewFsDefaultValue}} are necessary?
* Can you explain why the changes in 
{{TestViewFileSystemDelegationTokenSupport}} are necessary? Same for 
{{TestViewFileSystemDelegation}} -- it seems like the old way of returning the 
created {{fs}} was cleaner? I also don't understand the need for changes in 
{{testSanity()}} -- does the string comparison no longer work?

> ViewFileSystem.close doesn't close child filesystems and causes FileSystem 
> objects leak.
> 
>
> Key: HADOOP-15565
> URL: https://issues.apache.org/jira/browse/HADOOP-15565
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HADOOP-15565.0001.patch, HADOOP-15565.0002.patch, 
> HADOOP-15565.0003.patch, HADOOP-15565.0004.patch, HADOOP-15565.0005.patch, 
> HADOOP-15565.0006.patch
>
>
> ViewFileSystem.close() does nothing but remove itself from FileSystem.CACHE. 
> Its child filesystems are cached in FileSystem.CACHE and shared by all the 
> ViewFileSystem instances. We couldn't simply close all the child filesystems 
> because that would break the semantics of FileSystem.newInstance().
> We might add an inner cache to ViewFileSystem and let it cache all the child 
> filesystems. The child filesystems would then no longer be shared. When the 
> ViewFileSystem is closed, we close all the child filesystems in the inner 
> cache. The ViewFileSystem is still cached by FileSystem.CACHE, so there won't 
> be too many FileSystem instances.
> FileSystem.CACHE caches the ViewFileSystem instance, and the other instances 
> (the child filesystems) are cached in the inner cache.
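The inner-cache idea described above can be sketched with plain JDK types. 
This is an illustrative stand-in only — `ChildFs`, `InnerCache`, and the 
scheme+authority key are simplified names, not the classes from the actual 
patch — showing why close() on the view can safely close every child when the 
children are private to one ViewFileSystem instance:

```java
import java.io.Closeable;
import java.net.URI;
import java.util.HashMap;
import java.util.Map;

public class InnerCacheDemo {

  // Stand-in for a child FileSystem; only the close() behaviour matters here.
  static class ChildFs implements Closeable {
    boolean closed = false;
    @Override public void close() { closed = true; }
  }

  static class InnerCache {
    private final Map<String, ChildFs> map = new HashMap<>();

    // Key on scheme://authority only — no UGI needed, since the children
    // are private to this ViewFileSystem instance and never shared.
    ChildFs get(URI uri) {
      String key = uri.getScheme() + "://" + uri.getAuthority();
      return map.computeIfAbsent(key, k -> new ChildFs());
    }

    // Called from the view's close(): close every private child filesystem.
    void closeAll() {
      for (ChildFs fs : map.values()) {
        fs.close();
      }
      map.clear();
    }

    int size() { return map.size(); }
  }

  // Returns true if lookups deduplicate by key and closeAll() closes everything.
  public static boolean demo() {
    InnerCache cache = new InnerCache();
    ChildFs a = cache.get(URI.create("hdfs://ns1/path"));
    ChildFs b = cache.get(URI.create("hdfs://ns2/path"));
    ChildFs a2 = cache.get(URI.create("hdfs://ns1/other")); // same key as a
    boolean sameInstance = (a == a2);
    cache.closeAll();
    return sameInstance && a.closed && b.closed && cache.size() == 0;
  }

  public static void main(String[] args) {
    System.out.println("children closed: " + demo());
  }
}
```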






[GitHub] [hadoop] mukul1987 commented on a change in pull request #1364: HDDS-1843. Undetectable corruption after restart of a datanode.

2019-09-04 Thread GitBox
mukul1987 commented on a change in pull request #1364: HDDS-1843. Undetectable 
corruption after restart of a datanode.
URL: https://github.com/apache/hadoop/pull/1364#discussion_r320922823
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerSet.java
 ##
 @@ -240,14 +240,37 @@ public ContainerReportsProto getContainerReport() throws 
IOException {
   }
 
   /**
-   * Builds the missing container set by taking a diff total no containers
-   * actually found and number of containers which actually got created.
+   * Builds the missing container set by taking a diff between total no
+   * containers actually found and number of containers which actually
+   * got created. It also validates the BCSID stored in the snapshot file
+   * for each container as against what is reported in containerScan.
* This will only be called during the initialization of Datanode Service
* when  it still not a part of any write Pipeline.
-   * @param createdContainerSet ContainerId set persisted in the Ratis snapshot
+   * @param container2BCSIDMap Map of containerId to BCSID persisted in the
+   *   Ratis snapshot
*/
-  public void buildMissingContainerSet(Set<Long> createdContainerSet) {
-    missingContainerSet.addAll(createdContainerSet);
-    missingContainerSet.removeAll(containerMap.keySet());
+  public void buildMissingContainerSetAndValidate(
+      Map<Long, Long> container2BCSIDMap) throws IOException {
 
 Review comment:
   Let's make this multithreaded, so that on restart this state is reached a 
lot faster.
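The set diff in the patch above boils down to: everything recorded in the 
Ratis snapshot that is not found on disk is missing. A self-contained sketch 
with simplified names (`snapshotBcsids`, `containersOnDisk` are illustrative 
stand-ins for the real fields):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class MissingContainers {

  // Containers present in the snapshot but absent on disk are missing.
  public static Set<Long> findMissing(Map<Long, Long> snapshotBcsids,
                                      Set<Long> containersOnDisk) {
    Set<Long> missing = new HashSet<>(snapshotBcsids.keySet());
    missing.removeAll(containersOnDisk);
    return missing;
  }

  // Returns true when exactly container 2 is reported missing in the demo data.
  public static boolean demo() {
    Map<Long, Long> snapshot = new HashMap<>();
    snapshot.put(1L, 10L);   // containerId -> BCSID from the snapshot
    snapshot.put(2L, 12L);
    snapshot.put(3L, 15L);

    Set<Long> onDisk = new HashSet<>();
    onDisk.add(1L);
    onDisk.add(3L);

    Set<Long> missing = findMissing(snapshot, onDisk);
    return missing.size() == 1 && missing.contains(2L);
  }

  public static void main(String[] args) {
    System.out.println("demo ok: " + demo());
  }
}
```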





[GitHub] [hadoop] hadoop-yetus commented on issue #1373: HDDS-2053. Fix TestOzoneManagerRatisServer failure. Contributed by Xi…

2019-09-04 Thread GitBox
hadoop-yetus commented on issue #1373: HDDS-2053. Fix 
TestOzoneManagerRatisServer failure. Contributed by Xi…
URL: https://github.com/apache/hadoop/pull/1373#issuecomment-528039204
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 47 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 636 | trunk passed |
   | +1 | compile | 383 | trunk passed |
   | +1 | checkstyle | 82 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 910 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 180 | trunk passed |
   | 0 | spotbugs | 415 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 611 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 539 | the patch passed |
   | +1 | compile | 386 | the patch passed |
   | +1 | javac | 386 | the patch passed |
   | +1 | checkstyle | 86 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 674 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 174 | the patch passed |
   | +1 | findbugs | 629 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 280 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1797 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 52 | The patch does not generate ASF License warnings. |
   | | | 7643 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.scm.node.TestQueryNode |
   |   | hadoop.ozone.TestSecureOzoneCluster |
   |   | hadoop.ozone.om.TestSecureOzoneManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1373/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1373 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 27561e21465d 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 337e9b7 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1373/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1373/3/testReport/ |
   | Max. process+thread count | 5309 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1373/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #1400: HDDS-2079. Fix TestSecureOzoneManager. Contributed by Xiaoyu Yao.

2019-09-04 Thread GitBox
hadoop-yetus commented on issue #1400: HDDS-2079. Fix TestSecureOzoneManager. 
Contributed by Xiaoyu Yao.
URL: https://github.com/apache/hadoop/pull/1400#issuecomment-528033729
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 62 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 626 | trunk passed |
   | +1 | compile | 406 | trunk passed |
   | +1 | checkstyle | 83 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 946 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 183 | trunk passed |
   | 0 | spotbugs | 432 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 638 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 566 | the patch passed |
   | +1 | compile | 397 | the patch passed |
   | +1 | javac | 397 | the patch passed |
   | -0 | checkstyle | 40 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 663 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 183 | the patch passed |
   | +1 | findbugs | 667 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 272 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1771 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 50 | The patch does not generate ASF License warnings. |
   | | | 7786 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestOmBlockVersioning |
   |   | hadoop.ozone.om.TestOzoneManagerRestart |
   |   | hadoop.ozone.TestSecureOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1400/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1400 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux b702e5f581c8 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 337e9b7 |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1400/1/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1400/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1400/1/testReport/ |
   | Max. process+thread count | 4820 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/integration-test U: 
hadoop-ozone/integration-test |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1400/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-16268) Allow custom wrapped exception to be thrown by server if RPC call queue is filled up

2019-09-04 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16922752#comment-16922752
 ] 

Hudson commented on HADOOP-16268:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17224 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17224/])
HADOOP-16268. Allow StandbyException to be thrown as (xkrogen: rev 
337e9b794d3401748a86aa03a55ac61b0305d231)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallQueueManager.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestCallQueueManager.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestFairCallQueue.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/FairCallQueue.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java


> Allow custom wrapped exception to be thrown by server if RPC call queue is 
> filled up
> 
>
> Key: HADOOP-16268
> URL: https://issues.apache.org/jira/browse/HADOOP-16268
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16268.001.patch, HADOOP-16268.002.patch, 
> HADOOP-16268.003.patch, HADOOP-16268.004.patch
>
>
> In the current implementation of callqueue manager, 
> "CallQueueOverflowException" exceptions are always wrapping 
> "RetriableException". Through configs servers should be allowed to throw 
> custom exceptions based on new use cases.
> In CallQueueManager.java for backoff the below is done 
> {code:java}
>   // ideally this behavior should be controllable too.
>   private void throwBackoff() throws IllegalStateException {
> throw CallQueueOverflowException.DISCONNECT;
>   }
> {code}
> Since CallQueueOverflowException only wraps RetriableException, clients 
> would end up hitting the same server for retries. In use cases that the 
> router supports, these overflowed requests could be handled by another 
> router that shares the same state, thus distributing load better across a 
> cluster of routers. In the absence of any custom exception, the current 
> behavior should be preserved.
> In the CallQueueOverflowException class, a new StandbyException wrapper 
> should be created. Something like the below:
> {code:java}
>static final CallQueueOverflowException KEEPALIVE =
> new CallQueueOverflowException(
> new RetriableException(TOO_BUSY),
> RpcStatusProto.ERROR);
> static final CallQueueOverflowException DISCONNECT =
> new CallQueueOverflowException(
> new RetriableException(TOO_BUSY + " - disconnecting"),
> RpcStatusProto.FATAL);
> static final CallQueueOverflowException DISCONNECT2 =
> new CallQueueOverflowException(
> new StandbyException(TOO_BUSY + " - disconnecting"),
> RpcStatusProto.FATAL);
> {code}
>  






[jira] [Commented] (HADOOP-16542) Update commons-beanutils version

2019-09-04 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16922751#comment-16922751
 ] 

Hadoop QA commented on HADOOP-16542:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 78m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 24m 
47s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  1m 
45s{color} | {color:red} dist in trunk failed. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
74m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  1m  
1s{color} | {color:red} dist in trunk failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
34s{color} | {color:red} dist in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m 
13s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  1m 
13s{color} | {color:red} dist in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 19s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
43s{color} | {color:red} dist in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
57s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m 59s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
1s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
57s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 36s{color} 
| {color:red} dist in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}247m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ipc.TestProtoBufRpcServerHandoff |
|   | hadoop.util.curator.TestChildReaper |
|   | hadoop.ha.TestActiveStandbyElectorRealZK |
|   | hadoop.ha.TestZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.0 Server=19.03.0 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HADOOP-16542 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12979268/HADOOP-16542.002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  |
| uname | Linux 3cbe4b3cb7f5 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 337e9b7 |
| maven | 

[GitHub] [hadoop] hadoop-yetus commented on issue #1115: HADOOP-16207 testMR failures

2019-09-04 Thread GitBox
hadoop-yetus commented on issue #1115: HADOOP-16207 testMR failures
URL: https://github.com/apache/hadoop/pull/1115#issuecomment-528032122
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 83 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 13 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1263 | trunk passed |
   | +1 | compile | 36 | trunk passed |
   | +1 | checkstyle | 26 | trunk passed |
   | +1 | mvnsite | 40 | trunk passed |
   | +1 | shadedclient | 834 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 26 | trunk passed |
   | 0 | spotbugs | 64 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 62 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 38 | the patch passed |
   | +1 | compile | 30 | the patch passed |
   | +1 | javac | 30 | the patch passed |
   | -0 | checkstyle | 20 | hadoop-tools/hadoop-aws: The patch generated 11 new 
+ 13 unchanged - 1 fixed = 24 total (was 14) |
   | +1 | mvnsite | 38 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 860 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 24 | the patch passed |
   | +1 | findbugs | 67 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 86 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 32 | The patch does not generate ASF License warnings. |
   | | | 3661 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1115/16/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1115 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle |
   | uname | Linux bdcb05848a13 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 337e9b7 |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1115/16/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1115/16/testReport/ |
   | Max. process+thread count | 335 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1115/16/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-16543) Cached DNS name resolution error

2019-09-04 Thread Roger Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16922745#comment-16922745
 ] 

Roger Liu commented on HADOOP-16543:


We actually have a PR that can address this issue: 
[https://github.com/apache/hadoop/pull/1399]

Doing this at the Hadoop level can fix the problem

> Cached DNS name resolution error
> 
>
> Key: HADOOP-16543
> URL: https://issues.apache.org/jira/browse/HADOOP-16543
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.2
>Reporter: Roger Liu
>Priority: Major
>
> In Kubernetes, a node may go down and then come back later with a different 
> IP address. Yarn clients which are already running will be unable to 
> rediscover the node after it comes back up, because they cache the original 
> IP address. This is problematic for cases such as Spark HA on Kubernetes: 
> the node containing the resource manager may go down and come back up, 
> meaning existing node managers must then also be restarted.
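The linked PR addresses this at the Hadoop level. Separately, there is a 
standard JVM-level knob worth knowing about — shown here as a sketch, not as 
part of the PR: lowering the positive DNS cache TTL makes `InetAddress` 
lookups re-resolve a hostname after a pod returns with a new IP. It must run 
before the first resolution, and it does not fix any address caching Hadoop 
does on top of the JVM.

```java
import java.security.Security;

public class DnsCacheTtl {
  public static void main(String[] args) {
    // Cache successful lookups for at most 30 seconds instead of
    // indefinitely (the default when a SecurityManager is installed).
    Security.setProperty("networkaddress.cache.ttl", "30");
    // Never cache failed lookups, so a node that was briefly down
    // is retried immediately.
    Security.setProperty("networkaddress.cache.negative.ttl", "0");
    System.out.println("positive TTL now: "
        + Security.getProperty("networkaddress.cache.ttl"));
  }
}
```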






[GitHub] [hadoop] keith-turner closed pull request #350: Fix FileSystem.listStatus javadoc

2019-09-04 Thread GitBox
keith-turner closed pull request #350: Fix FileSystem.listStatus javadoc
URL: https://github.com/apache/hadoop/pull/350
 
 
   





[GitHub] [hadoop] steveloughran opened a new pull request #1402: HADOOP-16547. make sure that s3guard prune sets up the FS

2019-09-04 Thread GitBox
steveloughran opened a new pull request #1402: HADOOP-16547. make sure that 
s3guard prune sets up the FS
URL: https://github.com/apache/hadoop/pull/1402
 
 
   initial patch; not done the full testing yet
   
   Change-Id: Iaf71561cef6c797a3c66fed110faf08da6cac361





[GitHub] [hadoop] steveloughran commented on a change in pull request #1402: HADOOP-16547. make sure that s3guard prune sets up the FS

2019-09-04 Thread GitBox
steveloughran commented on a change in pull request #1402: HADOOP-16547. make 
sure that s3guard prune sets up the FS
URL: https://github.com/apache/hadoop/pull/1402#discussion_r320899753
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardTool.java
 ##
 @@ -1112,6 +1138,7 @@ public int run(String[] args, PrintStream out) throws
   return SUCCESS;
 }
 
+
 
 Review comment:
   will cut 





[GitHub] [hadoop] bshashikant commented on issue #1364: HDDS-1843. Undetectable corruption after restart of a datanode.

2019-09-04 Thread GitBox
bshashikant commented on issue #1364: HDDS-1843. Undetectable corruption after 
restart of a datanode.
URL: https://github.com/apache/hadoop/pull/1364#issuecomment-528017062
 
 
   /retest





[GitHub] [hadoop] dineshchitlangia commented on a change in pull request #1386: HDDS-2015. Encrypt/decrypt key using symmetric key while writing/reading

2019-09-04 Thread GitBox
dineshchitlangia commented on a change in pull request #1386: HDDS-2015. 
Encrypt/decrypt key using symmetric key while writing/reading
URL: https://github.com/apache/hadoop/pull/1386#discussion_r320897464
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
 ##
 @@ -2647,4 +2648,89 @@ private void completeMultipartUpload(OzoneBucket 
bucket, String keyName,
 Assert.assertEquals(omMultipartUploadCompleteInfo.getKey(), keyName);
 Assert.assertNotNull(omMultipartUploadCompleteInfo.getHash());
   }
+
+  /**
+   * Tests GDPR encryption/decryption.
+   * 1. Create GDPR Enabled bucket.
+   * 2. Create a Key in this bucket so it gets encrypted via GDPRSymmetricKey.
+   * 3. Read key and validate the content/metadata is as expected because the
+   * readKey will decrypt using the GDPR Symmetric Key with details from 
KeyInfo
+   * Metadata.
+   * 4. To check encryption, we forcibly update KeyInfo Metadata and remove the
 
 Review comment:
   AFAIK, we do not provide an option to end users for updating metadata.
   At best, we provide options to create, delete, info, list, add/remove ACL, 
get/set ACL.
   
   Also, this is a test class to simulate that encryption is working.
   
   That said, we provide an option to enable gdpr for a bucket during creation in 
HDDS-2016. So, if a user is not admin/owner, they cannot enable the gdpr 
flag.





[GitHub] [hadoop] hadoop-yetus commented on issue #1370: HDFS-14492. Snapshot memory leak. Contributed by Wei-Chiu Chuang.

2019-09-04 Thread GitBox
hadoop-yetus commented on issue #1370: HDFS-14492. Snapshot memory leak. 
Contributed by Wei-Chiu Chuang.
URL: https://github.com/apache/hadoop/pull/1370#issuecomment-528015618
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 56 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1120 | trunk passed |
   | +1 | compile | 62 | trunk passed |
   | +1 | checkstyle | 50 | trunk passed |
   | +1 | mvnsite | 72 | trunk passed |
   | +1 | shadedclient | 813 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 63 | trunk passed |
   | 0 | spotbugs | 206 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 204 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 73 | the patch passed |
   | +1 | compile | 66 | the patch passed |
   | +1 | javac | 66 | the patch passed |
   | +1 | checkstyle | 50 | the patch passed |
   | +1 | mvnsite | 67 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 875 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 60 | the patch passed |
   | +1 | findbugs | 209 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 5888 | hadoop-hdfs in the patch failed. |
   | +1 | asflicense | 43 | The patch does not generate ASF License warnings. |
   | | | 9844 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
   |   | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | 
hadoop.hdfs.server.namenode.sps.TestStoragePolicySatisfierWithStripedFile |
   |   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
   |   | hadoop.hdfs.TestDatanodeDeath |
   |   | hadoop.hdfs.TestDistributedFileSystemWithECFileWithRandomECPolicy |
   |   | hadoop.hdfs.TestReadStripedFileWithDNFailure |
   |   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
   |   | hadoop.hdfs.server.namenode.TestBackupNode |
   |   | hadoop.hdfs.TestErasureCodingPolicies |
   |   | hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized |
   |   | hadoop.hdfs.TestAppendSnapshotTruncate |
   |   | hadoop.hdfs.server.mover.TestMover |
   |   | hadoop.hdfs.server.namenode.TestFileTruncate |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1370/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1370 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux fb3485836809 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 1ae7759 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1370/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1370/3/testReport/ |
   | Max. process+thread count | 4781 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1370/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] dineshchitlangia commented on a change in pull request #1386: HDDS-2015. Encrypt/decrypt key using symmetric key while writing/reading

2019-09-04 Thread GitBox
dineshchitlangia commented on a change in pull request #1386: HDDS-2015. 
Encrypt/decrypt key using symmetric key while writing/reading
URL: https://github.com/apache/hadoop/pull/1386#discussion_r320893985
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
 ##
 @@ -601,6 +605,16 @@ public OzoneOutputStream createKey(
 HddsClientUtils.verifyResourceName(volumeName, bucketName);
 HddsClientUtils.checkNotNull(keyName, type, factor);
 String requestId = UUID.randomUUID().toString();
+
+if(Boolean.valueOf(metadata.get(OzoneConsts.GDPR_FLAG))){
+  try{
+GDPRSymmetricKey gKey = new GDPRSymmetricKey();
+metadata.putAll(gKey.getKeyDetails());
+  }catch (Exception e) {
+throw new IOException(e);
 
 Review comment:
   @ajayydv The only time this line would create an exception is when the host 
does not have the JCE policy jars installed, as the default secret is 32 chars and 
without the JCE policy jars it would throw "java.security.InvalidKeyException: 
Illegal key size or default parameters". Since we are throwing the exception 
rather than continuing without successful key generation, a debug statement won't 
add much value.
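That failure mode can be probed up front with the standard JCA call `Cipher.getMaxAllowedKeyLength`; a minimal sketch (the class name and threshold are illustrative, assuming an AES-256-class secret like the one discussed):

```java
import javax.crypto.Cipher;
import java.security.NoSuchAlgorithmException;

public class JcePolicyCheck {
    // True when the runtime permits 256-bit AES keys, i.e. the
    // unlimited-strength policy is in effect (the default on modern JDKs).
    public static boolean supportsAes256() throws NoSuchAlgorithmException {
        return Cipher.getMaxAllowedKeyLength("AES") >= 256;
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        if (!supportsAes256()) {
            // Same situation that surfaces as InvalidKeyException:
            // "Illegal key size or default parameters".
            System.err.println("JCE unlimited-strength policy not installed");
        }
    }
}
```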





[GitHub] [hadoop] hadoop-yetus commented on issue #1369: HDDS-2020. Remove mTLS from Ozone GRPC. Contributed by Xiaoyu Yao.

2019-09-04 Thread GitBox
hadoop-yetus commented on issue #1369: HDDS-2020. Remove mTLS from Ozone GRPC. 
Contributed by Xiaoyu Yao.
URL: https://github.com/apache/hadoop/pull/1369#issuecomment-528010252
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 121 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 12 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 73 | Maven dependency ordering for branch |
   | +1 | mvninstall | 609 | trunk passed |
   | +1 | compile | 372 | trunk passed |
   | +1 | checkstyle | 71 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 948 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 169 | trunk passed |
   | 0 | spotbugs | 440 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 640 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 31 | Maven dependency ordering for patch |
   | -1 | mvninstall | 42 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 28 | hadoop-ozone in the patch failed. |
   | -1 | compile | 28 | hadoop-hdds in the patch failed. |
   | -1 | compile | 22 | hadoop-ozone in the patch failed. |
   | -1 | cc | 28 | hadoop-hdds in the patch failed. |
   | -1 | cc | 22 | hadoop-ozone in the patch failed. |
   | -1 | javac | 28 | hadoop-hdds in the patch failed. |
   | -1 | javac | 22 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 21 | The patch fails to run checkstyle in hadoop-hdds |
   | -0 | checkstyle | 21 | The patch fails to run checkstyle in hadoop-ozone |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 4 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 732 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 24 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 24 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 35 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 24 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 32 | hadoop-hdds in the patch failed. |
   | -1 | unit | 23 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 37 | The patch does not generate ASF License warnings. |
   | | | 4285 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1369 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml cc |
   | uname | Linux 1efaf416a890 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 337e9b7 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/6/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/6/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/6/artifact/out/patch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/6/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | cc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/6/artifact/out/patch-compile-hadoop-hdds.txt
 |
   | cc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/6/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/6/artifact/out/patch-compile-hadoop-hdds.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/6/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/6/artifact/out//home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1369/out/maven-patch-checkstyle-hadoop-hdds.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/6/artifact/out//home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1369/out/maven-patch-checkstyle-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/6/artifact/out/patch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1369/6/artifact/out/patch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 

[jira] [Created] (HADOOP-16548) ABFS: Config to enable/disable flush operation

2019-09-04 Thread Bilahari T H (Jira)
Bilahari T H created HADOOP-16548:
-

 Summary: ABFS: Config to enable/disable flush operation
 Key: HADOOP-16548
 URL: https://issues.apache.org/jira/browse/HADOOP-16548
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure
Reporter: Bilahari T H
Assignee: Bilahari T H


Make flush operation enabled/disabled through configuration. This is part of 
performance improvements for ABFS driver.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bshashikant commented on a change in pull request #1364: HDDS-1843. Undetectable corruption after restart of a datanode.

2019-09-04 Thread GitBox
bshashikant commented on a change in pull request #1364: HDDS-1843. 
Undetectable corruption after restart of a datanode.
URL: https://github.com/apache/hadoop/pull/1364#discussion_r320889606
 
 

 ##
 File path: hadoop-hdds/common/src/main/proto/DatanodeContainerProtocol.proto
 ##
 @@ -248,8 +248,13 @@ message ContainerDataProto {
   optional ContainerType containerType = 10 [default = KeyValueContainer];
 }
 
-message ContainerIdSetProto {
-repeated int64 containerId = 1;
+message Container2BCSIDMapEntryProto {
 
 Review comment:
   address in the next patch.





[GitHub] [hadoop] bshashikant commented on a change in pull request #1364: HDDS-1843. Undetectable corruption after restart of a datanode.

2019-09-04 Thread GitBox
bshashikant commented on a change in pull request #1364: HDDS-1843. 
Undetectable corruption after restart of a datanode.
URL: https://github.com/apache/hadoop/pull/1364#discussion_r320888068
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/impl/BlockManagerImpl.java
 ##
 @@ -282,5 +282,4 @@ public void deleteBlock(Container container, BlockID 
blockID) throws
   public void shutdown() {
 BlockUtils.shutdownCache(ContainerCache.getInstance(config));
   }
-
 
 Review comment:
   address in the next patch.





[GitHub] [hadoop] bshashikant commented on a change in pull request #1364: HDDS-1843. Undetectable corruption after restart of a datanode.

2019-09-04 Thread GitBox
bshashikant commented on a change in pull request #1364: HDDS-1843. 
Undetectable corruption after restart of a datanode.
URL: https://github.com/apache/hadoop/pull/1364#discussion_r320887895
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/HddsDispatcher.java
 ##
 @@ -329,6 +336,21 @@ private ContainerCommandResponseProto dispatchRequest(
 }
   }
 
+  private void updateBCSID(Container container,
+  DispatcherContext dispatcherContext, ContainerProtos.Type cmdType) {
+long bcsID = container.getBlockCommitSequenceId();
+long containerId = container.getContainerData().getContainerID();
+Map container2BCSIDMap;
+if (dispatcherContext != null && (cmdType == ContainerProtos.Type.PutBlock
+|| cmdType == ContainerProtos.Type.PutSmallFile)) {
+  container2BCSIDMap = dispatcherContext.getContainer2BCSIDMap();
+  Preconditions.checkNotNull(container2BCSIDMap);
+  Preconditions.checkArgument(container2BCSIDMap.containsKey(containerId));
+  // updates the latest BCSID on every putBlock or putSmallFile
+  // transaction over Ratis.
+  container2BCSIDMap.computeIfPresent(containerId, (u, v) -> v = bcsID);
 
 Review comment:
   will address in the next patch.
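For reference on the line flagged above: `Map.computeIfPresent` replaces the value with whatever the remapping function returns (and does nothing for absent keys), so the `(u, v) -> v = bcsID` form is equivalent to simply returning `bcsID`. A standalone illustration (names are illustrative, not from the patch):

```java
import java.util.HashMap;
import java.util.Map;

public class ComputeIfPresentDemo {
    public static void main(String[] args) {
        Map<Long, Long> container2BCSID = new HashMap<>();
        container2BCSID.put(1L, 5L);        // previously persisted BCSID
        long bcsID = 10L;                   // latest BCSID from the container
        // The remapping function's return value becomes the new mapping.
        container2BCSID.computeIfPresent(1L, (id, old) -> bcsID);
        // Absent keys are untouched, so unknown containers are never added.
        container2BCSID.computeIfPresent(2L, (id, old) -> bcsID);
        System.out.println(container2BCSID); // prints {1=10}
    }
}
```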





[GitHub] [hadoop] bshashikant commented on a change in pull request #1364: HDDS-1843. Undetectable corruption after restart of a datanode.

2019-09-04 Thread GitBox
bshashikant commented on a change in pull request #1364: HDDS-1843. 
Undetectable corruption after restart of a datanode.
URL: https://github.com/apache/hadoop/pull/1364#discussion_r320887705
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerSet.java
 ##
 @@ -240,14 +240,37 @@ public ContainerReportsProto getContainerReport() throws 
IOException {
   }
 
   /**
-   * Builds the missing container set by taking a diff total no containers
-   * actually found and number of containers which actually got created.
+   * Builds the missing container set by taking a diff between total no
+   * containers actually found and number of containers which actually
+   * got created. It also validates the BCSID stored in the snapshot file
+   * for each container as against what is reported in containerScan.
* This will only be called during the initialization of Datanode Service
* when  it still not a part of any write Pipeline.
-   * @param createdContainerSet ContainerId set persisted in the Ratis snapshot
+   * @param container2BCSIDMap Map of containerId to BCSID persisted in the
+   *   Ratis snapshot
*/
-  public void buildMissingContainerSet(Set createdContainerSet) {
-missingContainerSet.addAll(createdContainerSet);
-missingContainerSet.removeAll(containerMap.keySet());
+  public void buildMissingContainerSetAndValidate(
+  Map container2BCSIDMap) throws IOException {
+for (Map.Entry mapEntry : container2BCSIDMap.entrySet()) {
+  long id = mapEntry.getKey();
+  if (!containerMap.containsKey(id)) {
+LOG.warn("Adding container {} to missing container set.", id);
+missingContainerSet.add(id);
+  } else {
+Container container = containerMap.get(id);
+long containerBCSID = container.getBlockCommitSequenceId();
+long snapshotBCSID = mapEntry.getValue();
+if (containerBCSID < snapshotBCSID) {
+  LOG.warn(
+  "Marking container {} unhealthy as reported BCSID {} is smaller"
 
 Review comment:
   will address in the next patch.





[GitHub] [hadoop] bshashikant commented on a change in pull request #1364: HDDS-1843. Undetectable corruption after restart of a datanode.

2019-09-04 Thread GitBox
bshashikant commented on a change in pull request #1364: HDDS-1843. 
Undetectable corruption after restart of a datanode.
URL: https://github.com/apache/hadoop/pull/1364#discussion_r320887264
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
 ##
 @@ -258,8 +257,9 @@ private long loadSnapshot(SingleFileSnapshotInfo snapshot)
* @throws IOException
*/
   public void persistContainerSet(OutputStream out) throws IOException {
 
 Review comment:
   will address it in the next patch.





[GitHub] [hadoop] bshashikant commented on a change in pull request #1364: HDDS-1843. Undetectable corruption after restart of a datanode.

2019-09-04 Thread GitBox
bshashikant commented on a change in pull request #1364: HDDS-1843. 
Undetectable corruption after restart of a datanode.
URL: https://github.com/apache/hadoop/pull/1364#discussion_r320886836
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/HddsDispatcher.java
 ##
 @@ -329,6 +336,21 @@ private ContainerCommandResponseProto dispatchRequest(
 }
   }
 
+  private void updateBCSID(Container container,
+  DispatcherContext dispatcherContext, ContainerProtos.Type cmdType) {
+long bcsID = container.getBlockCommitSequenceId();
+long containerId = container.getContainerData().getContainerID();
+Map container2BCSIDMap;
+if (dispatcherContext != null && (cmdType == ContainerProtos.Type.PutBlock
 
 Review comment:
   The dispatcher context is not set up for all cmd types and will be null for 
some. We need to check for the specific cmd types before getting the context.





[GitHub] [hadoop] hadoop-yetus commented on issue #1388: HADOOP-16255. Add ChecksumFs.rename(path, path, boolean) to rename crc file as well when FileContext.rename(path, path, options) is called.

2019-09-04 Thread GitBox
hadoop-yetus commented on issue #1388: HADOOP-16255. Add 
ChecksumFs.rename(path, path, boolean) to rename crc file as well when 
FileContext.rename(path, path, options) is called.
URL: https://github.com/apache/hadoop/pull/1388#issuecomment-528005443
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 35 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1237 | trunk passed |
   | +1 | compile | 1193 | trunk passed |
   | +1 | checkstyle | 48 | trunk passed |
   | +1 | mvnsite | 91 | trunk passed |
   | +1 | shadedclient | 948 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 70 | trunk passed |
   | 0 | spotbugs | 146 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 143 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 55 | the patch passed |
   | +1 | compile | 1173 | the patch passed |
   | +1 | javac | 1173 | the patch passed |
   | +1 | checkstyle | 49 | the patch passed |
   | +1 | mvnsite | 79 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 810 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 68 | the patch passed |
   | +1 | findbugs | 161 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 641 | hadoop-common in the patch passed. |
   | +1 | asflicense | 53 | The patch does not generate ASF License warnings. |
   | | | 6909 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1388/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1388 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 98f6ca19ebf6 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 337e9b7 |
   | Default Java | 1.8.0_222 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1388/6/testReport/ |
   | Max. process+thread count | 1327 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1388/6/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-14951) KMSACL implementation is not configurable

2019-09-04 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-14951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16922697#comment-16922697
 ] 

Hadoop QA commented on HADOOP-14951:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
43s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  8m  
7s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
17s{color} | {color:green} root: The patch generated 0 new + 110 unchanged - 1 
fixed = 110 total (was 111) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 41s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
33s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 95m 26s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
56s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}203m 24s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
|   | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.hdfs.server.namenode.TestBlockPlacementPolicyRackFaultTolerant |
|   | hadoop.hdfs.TestErasureCodingMultipleRacks |
|   | 
hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness
 |
|   | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics |
|   | hadoop.hdfs.tools.TestDFSAdminWithHA |
|   | hadoop.hdfs.server.namenode.TestFileContextAcl |
|   | hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized |
|   | hadoop.hdfs.server.namenode.snapshot.TestRandomOpsWithSnapshots |
|   | hadoop.hdfs.tools.TestECAdmin |
|   | hadoop.hdfs.server.mover.TestStorageMover |
|   | hadoop.hdfs.server.namenode.snapshot.TestSnapshotDeletion |
|   

[GitHub] [hadoop] hadoop-yetus commented on issue #1160: HADOOP-16458 LocatedFileStatusFetcher.getFileStatuses failing intermittently with s3

2019-09-04 Thread GitBox
hadoop-yetus commented on issue #1160: HADOOP-16458 
LocatedFileStatusFetcher.getFileStatuses failing intermittently with s3
URL: https://github.com/apache/hadoop/pull/1160#issuecomment-527998693
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 42 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 82 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1104 | trunk passed |
   | +1 | compile | 1119 | trunk passed |
   | +1 | checkstyle | 166 | trunk passed |
   | +1 | mvnsite | 200 | trunk passed |
   | +1 | shadedclient | 1185 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 147 | trunk passed |
   | 0 | spotbugs | 73 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 320 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 29 | Maven dependency ordering for patch |
   | +1 | mvninstall | 124 | the patch passed |
   | +1 | compile | 1008 | the patch passed |
   | +1 | javac | 1008 | the patch passed |
   | -0 | checkstyle | 148 | root: The patch generated 1 new + 231 unchanged - 
9 fixed = 232 total (was 240) |
   | +1 | mvnsite | 179 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 693 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 139 | the patch passed |
   | +1 | findbugs | 314 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 555 | hadoop-common in the patch passed. |
   | +1 | unit | 339 | hadoop-mapreduce-client-core in the patch passed. |
   | +1 | unit | 92 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 44 | The patch does not generate ASF License warnings. |
   | | | 7972 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1160/15/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1160 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 0bb11b0954b6 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 1ae7759 |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1160/15/artifact/out/diff-checkstyle-root.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1160/15/testReport/ |
   | Max. process+thread count | 1480 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
hadoop-tools/hadoop-aws U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1160/15/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] nandakumar131 commented on a change in pull request #1401: HDDS-1561: Mark OPEN containers as QUASI_CLOSED as part of Ratis groupRemove

2019-09-04 Thread GitBox
nandakumar131 commented on a change in pull request #1401: HDDS-1561: Mark OPEN 
containers as QUASI_CLOSED as part of Ratis groupRemove
URL: https://github.com/apache/hadoop/pull/1401#discussion_r320872891
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/CloseContainerCommandHandler.java
 ##
 @@ -111,10 +111,13 @@ public void handle(SCMCommand command, OzoneContainer ozoneContainer,
       return;
     }
     // If we reach here, there is no active pipeline for this container.
-    if (!closeCommand.getForce()) {
-      // QUASI_CLOSE the container.
-      controller.quasiCloseContainer(containerId);
-    } else {
+    if (container.getContainerState() == ContainerProtos.ContainerDataProto
+        .State.OPEN || container.getContainerState() ==
+        ContainerProtos.ContainerDataProto.State.CLOSING) {
+      // Container should not exist in OPEN or CLOSING state without a
+      // pipeline.
+      controller.markContainerUnhealthy(containerId);
+    } else if (closeCommand.getForce()) {
       // SCM told us to force close the container.
       controller.closeContainer(containerId);
     }
 
 Review comment:
   Not exactly related to this patch, but this part of code has become a little 
bit messy.
   We should be able to refactor this.
   ```
   switch (container.getContainerState()) {
   case OPEN:
     controller.markContainerForClose(containerId);
   case CLOSING:
     final HddsProtos.PipelineID pipelineID = closeCommand.getPipelineID();
     final XceiverServerSpi writeChannel = ozoneContainer.getWriteChannel();
     if (writeChannel.isExist(pipelineID)) {
       writeChannel.submitRequest(getContainerCommandRequestProto(
           datanodeDetails, containerId), pipelineID);
     } else {
       controller.markContainerUnhealthy(containerId);
     }
     break;
   case QUASI_CLOSED:
     if (closeCommand.getForce()) {
       controller.closeContainer(containerId);
       break;
     }
   case CLOSED:
   case UNHEALTHY:
   case INVALID:
     LOG.debug("Cannot close the container #{}, the container is" +
         " in {} state.", containerId, container.getContainerState());
   }
   ```





[GitHub] [hadoop] nandakumar131 commented on a change in pull request #1401: HDDS-1561: Mark OPEN containers as QUASI_CLOSED as part of Ratis groupRemove

2019-09-04 Thread GitBox
nandakumar131 commented on a change in pull request #1401: HDDS-1561: Mark OPEN 
containers as QUASI_CLOSED as part of Ratis groupRemove
URL: https://github.com/apache/hadoop/pull/1401#discussion_r320873889
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
 ##
 @@ -800,6 +805,24 @@ public void notifyLogFailed(Throwable t, LogEntryProto failedEntry) {
     return future;
   }
 
+  @Override
+  public void notifyGroupRemove() {
+    ratisServer.notifyGroupRemove(gid);
+    // Make best effort to quasi-close all the containers on group removal.
+    // Containers already in terminal state like CLOSED or UNHEALTHY will not
+    // be affected.
+    for (Long cid : createContainerSet) {
+      try {
+        containerController.markContainerForClose(cid);
+      } catch (IOException e) {
+      }
+      try {
+        containerController.quasiCloseContainer(cid);
+      } catch (IOException e) {
+      }
+    }
+  }
+
 
 Review comment:
   If markContainerForClose fails, quasiCloseContainer will definitely fail. We 
can put both of the calls into the same try-catch block.
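   The suggestion above can be sketched with a minimal, self-contained stand-in. Note that `ContainerController` here is a hypothetical stub (its failure condition is made up for illustration), not Ozone's real class; the point is only that with both calls in one try-catch, a failure in the first call skips the second instead of letting it fail separately:

   ```java
   import java.io.IOException;
   import java.util.LinkedHashSet;
   import java.util.Set;

   // Hypothetical stand-in for Ozone's ContainerController; the real class
   // and the Ratis group-remove flow live in the Hadoop/Ozone codebase.
   class ContainerController {
       final Set<Long> markedForClose = new LinkedHashSet<>();
       final Set<Long> quasiClosed = new LinkedHashSet<>();

       void markContainerForClose(long cid) throws IOException {
           if (cid < 0) {
               // made-up failure condition for the sketch
               throw new IOException("unknown container " + cid);
           }
           markedForClose.add(cid);
       }

       void quasiCloseContainer(long cid) throws IOException {
           quasiClosed.add(cid);
       }
   }

   public class GroupRemoveSketch {
       // Best-effort quasi-close with both calls in ONE try-catch: if
       // markContainerForClose throws, quasiCloseContainer is skipped for
       // that container rather than attempted (and failing) on its own.
       static void quasiCloseAll(ContainerController controller,
                                 Iterable<Long> cids) {
           for (Long cid : cids) {
               try {
                   controller.markContainerForClose(cid);
                   controller.quasiCloseContainer(cid);
               } catch (IOException e) {
                   // best effort: skip containers the controller rejects
               }
           }
       }

       public static void main(String[] args) {
           ContainerController controller = new ContainerController();
           quasiCloseAll(controller, java.util.Arrays.asList(1L, -1L, 2L));
           // container -1 fails in markContainerForClose, so it is never
           // quasi-closed
           System.out.println(controller.quasiClosed); // prints [1, 2]
       }
   }
   ```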





[GitHub] [hadoop] xiaoyuyao commented on issue #1373: HDDS-2053. Fix TestOzoneManagerRatisServer failure. Contributed by Xi…

2019-09-04 Thread GitBox
xiaoyuyao commented on issue #1373: HDDS-2053. Fix TestOzoneManagerRatisServer 
failure. Contributed by Xi…
URL: https://github.com/apache/hadoop/pull/1373#issuecomment-527995072
 
 
   In the test Run/Debug configuration, choose "Repeat": "Until Failure" for the test case
   TestOzoneManagerRatisServer#verifyRaftGroupIdGenerationWithCustomOmServiceId.





[GitHub] [hadoop] hadoop-yetus commented on issue #1326: HDDS-1898. GrpcReplicationService#download cannot replicate the container.

2019-09-04 Thread GitBox
hadoop-yetus commented on issue #1326: HDDS-1898. 
GrpcReplicationService#download cannot replicate the container.
URL: https://github.com/apache/hadoop/pull/1326#issuecomment-527987953
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 43 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 64 | Maven dependency ordering for branch |
   | +1 | mvninstall | 673 | trunk passed |
   | +1 | compile | 389 | trunk passed |
   | +1 | checkstyle | 82 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 899 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 191 | trunk passed |
   | 0 | spotbugs | 952 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 1236 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 74 | Maven dependency ordering for patch |
   | +1 | mvninstall | 1077 | the patch passed |
   | +1 | compile | 564 | the patch passed |
   | +1 | javac | 564 | the patch passed |
   | +1 | checkstyle | 88 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 796 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 179 | the patch passed |
   | +1 | findbugs | 667 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 279 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1855 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 54 | The patch does not generate ASF License warnings. |
   | | | 9558 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.scm.node.TestQueryNode |
   |   | hadoop.ozone.om.TestSecureOzoneManager |
   |   | hadoop.ozone.TestSecureOzoneCluster |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1326/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1326 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 91a8e13a5493 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 1ae7759 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1326/5/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1326/5/testReport/ |
   | Max. process+thread count | 5200 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service hadoop-ozone/integration-test 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1326/5/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #1359: HADOOP-16430.S3AFilesystem.delete to incrementally update s3guard with deletions

2019-09-04 Thread GitBox
hadoop-yetus commented on issue #1359: HADOOP-16430.S3AFilesystem.delete to 
incrementally update s3guard with deletions
URL: https://github.com/apache/hadoop/pull/1359#issuecomment-527983678
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 91 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 18 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 75 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1363 | trunk passed |
   | +1 | compile | 1329 | trunk passed |
   | +1 | checkstyle | 165 | trunk passed |
   | +1 | mvnsite | 127 | trunk passed |
   | +1 | shadedclient | 1126 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 118 | trunk passed |
   | 0 | spotbugs | 81 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 225 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for patch |
   | +1 | mvninstall | 95 | the patch passed |
   | +1 | compile | 1222 | the patch passed |
   | +1 | javac | 1222 | the patch passed |
   | -0 | checkstyle | 179 | root: The patch generated 2 new + 93 unchanged - 5 
fixed = 95 total (was 98) |
   | +1 | mvnsite | 126 | the patch passed |
   | +1 | whitespace | 1 | The patch has no whitespace issues. |
   | +1 | shadedclient | 770 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 30 | hadoop-tools_hadoop-aws generated 1 new + 1 unchanged 
- 0 fixed = 2 total (was 1) |
   | +1 | findbugs | 206 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 562 | hadoop-common in the patch passed. |
   | +1 | unit | 98 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 48 | The patch does not generate ASF License warnings. |
   | | | 8059 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1359/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1359 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux be7f78b51008 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 1ae7759 |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1359/6/artifact/out/diff-checkstyle-root.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1359/6/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1359/6/testReport/ |
   | Max. process+thread count | 1343 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1359/6/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] lokeshj1703 opened a new pull request #1401: HDDS-1561: Mark OPEN containers as QUASI_CLOSED as part of Ratis groupRemove

2019-09-04 Thread GitBox
lokeshj1703 opened a new pull request #1401: HDDS-1561: Mark OPEN containers as 
QUASI_CLOSED as part of Ratis groupRemove
URL: https://github.com/apache/hadoop/pull/1401
 
 
   Right now, if a pipeline is destroyed by SCM, all the containers on the 
pipeline are marked as quasi-closed when the datanode receives the close 
container command. SCM, while processing these container reports, marks the 
containers as closed once a majority of the nodes are available.
   
   This is, however, not a sufficient condition in cases where the Raft log 
directory is missing or corrupted, as the containers will not have all the 
applied transactions.
   To solve this problem, we should QUASI_CLOSE the containers in the datanode 
as part of Ratis groupRemove. If a container is in OPEN state in a datanode 
without any active pipeline, it will be marked as UNHEALTHY while processing 
the close container command.





[jira] [Commented] (HADOOP-16547) s3guard prune command doesn't get AWS auth chain from FS

2019-09-04 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16922637#comment-16922637
 ] 

Steve Loughran commented on HADOOP-16547:
-

{code}
 hadoop s3guard prune -days 7 -hours 4 -minutes 0 -seconds 1 s3a://landsat-pds/ 
java.nio.file.AccessDeniedException: spark-sql-102039-j8n: 
org.apache.hadoop.fs.s3a.auth.NoAuthWithAWSException: No AWS Credentials 
provided by TemporaryAWSCredentialsProvider SimpleAWSCredentialsProvider 
EnvironmentVariableCredentialsProvider IAMInstanceCredentialsProvider : 
com.amazonaws.SdkClientException: Unable to load AWS credentials from 
environment variables (AWS_ACCESS_KEY_ID (or AWS_ACCESS_KEY) and AWS_SECRET_KEY 
(or AWS_SECRET_ACCESS_KEY))
at 
org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:200)
at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initTable(DynamoDBMetadataStore.java:1811)
at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initialize(DynamoDBMetadataStore.java:520)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.initMetadataStore(S3GuardTool.java:317)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$Prune.run(S3GuardTool.java:1071)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:401)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:1672)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.main(S3GuardTool.java:1681)
Caused by: org.apache.hadoop.fs.s3a.auth.NoAuthWithAWSException: No AWS 
Credentials provided by TemporaryAWSCredentialsProvider 
SimpleAWSCredentialsProvider EnvironmentVariableCredentialsProvider 
IAMInstanceCredentialsProvider : com.amazonaws.SdkClientException: Unable to 
load AWS credentials from environment variables (AWS_ACCESS_KEY_ID (or 
AWS_ACCESS_KEY) and AWS_SECRET_KEY (or AWS_SECRET_ACCESS_KEY))
at 
org.apache.hadoop.fs.s3a.AWSCredentialProviderList.getCredentials(AWSCredentialProviderList.java:216)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.getCredentialsFromContext(AmazonHttpClient.java:1225)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.runBeforeRequestHandlers(AmazonHttpClient.java:801)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:751)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:744)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:726)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:686)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:668)
at 
com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:532)
at 
com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:512)
at 
com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.doInvoke(AmazonDynamoDBClient.java:4279)
at 
com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.invoke(AmazonDynamoDBClient.java:4246)
at 
com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.executeDescribeTable(AmazonDynamoDBClient.java:1905)
at 
com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.describeTable(AmazonDynamoDBClient.java:1871)
at 
com.amazonaws.services.dynamodbv2.document.Table.describe(Table.java:137)
at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initTable(DynamoDBMetadataStore.java:1746)
... 7 more
Caused by: com.amazonaws.SdkClientException: Unable to load AWS credentials 
from environment variables (AWS_ACCESS_KEY_ID (or AWS_ACCESS_KEY) and 
AWS_SECRET_KEY (or AWS_SECRET_ACCESS_KEY))
at 
com.amazonaws.auth.EnvironmentVariableCredentialsProvider.getCredentials(EnvironmentVariableCredentialsProvider.java:50)
at 
org.apache.hadoop.fs.s3a.AWSCredentialProviderList.getCredentials(AWSCredentialProviderList.java:177)
... 22 more
{code}


> s3guard prune command doesn't get AWS auth chain from FS
> 
>
> Key: HADOOP-16547
> URL: https://issues.apache.org/jira/browse/HADOOP-16547
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> s3guard prune command doesn't get the AWS auth chain from any FS, so it just 
> drives the DDB store from the conf settings. If S3A is set up to use 
> Delegation tokens, then the DTs/custom AWS auth sequence is not picked up, so 
> you get an auth failure.
> Fix:
> # instantiate the FS before calling initMetadataStore
> # review other commands to make sure problem isn't replicated

[jira] [Commented] (HADOOP-16268) Allow custom wrapped exception to be thrown by server if RPC call queue is filled up

2019-09-04 Thread CR Hota (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16922636#comment-16922636
 ] 

CR Hota commented on HADOOP-16268:
--

[~xkrogen] Hey, no problem at all. Thanks for the nice review and commit. :)

> Allow custom wrapped exception to be thrown by server if RPC call queue is 
> filled up
> 
>
> Key: HADOOP-16268
> URL: https://issues.apache.org/jira/browse/HADOOP-16268
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16268.001.patch, HADOOP-16268.002.patch, 
> HADOOP-16268.003.patch, HADOOP-16268.004.patch
>
>
> In the current implementation of callqueue manager, 
> "CallQueueOverflowException" exceptions are always wrapping 
> "RetriableException". Through configs servers should be allowed to throw 
> custom exceptions based on new use cases.
> In CallQueueManager.java for backoff the below is done 
> {code:java}
>   // ideally this behavior should be controllable too.
>   private void throwBackoff() throws IllegalStateException {
> throw CallQueueOverflowException.DISCONNECT;
>   }
> {code}
> Since CallQueueOverflowException only wraps RetriableException, clients would 
> end up hitting the same server for retries. In use cases that the Router 
> supports, these overflowed requests could be handled by another router that 
> shares the same state, thus distributing load better across a cluster of 
> routers. In the absence of any custom exception, the current behavior should 
> be supported.
> In CallQueueOverflowException class a new Standby exception wrap should be 
> created. Something like the below
> {code:java}
>static final CallQueueOverflowException KEEPALIVE =
> new CallQueueOverflowException(
> new RetriableException(TOO_BUSY),
> RpcStatusProto.ERROR);
> static final CallQueueOverflowException DISCONNECT =
> new CallQueueOverflowException(
> new RetriableException(TOO_BUSY + " - disconnecting"),
> RpcStatusProto.FATAL);
> static final CallQueueOverflowException DISCONNECT2 =
> new CallQueueOverflowException(
> new StandbyException(TOO_BUSY + " - disconnecting"),
> RpcStatusProto.FATAL);
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16547) s3guard prune command doesn't get AWS auth chain from FS

2019-09-04 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-16547:
---

 Summary: s3guard prune command doesn't get AWS auth chain from FS
 Key: HADOOP-16547
 URL: https://issues.apache.org/jira/browse/HADOOP-16547
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.3.0
Reporter: Steve Loughran
Assignee: Steve Loughran


s3guard prune command doesn't get the AWS auth chain from any FS, so it just 
drives the DDB store from the conf settings. If S3A is set up to use Delegation 
tokens, then the DTs/custom AWS auth sequence is not picked up, so you get an 
auth failure.

Fix:

# instantiate the FS before calling initMetadataStore
# review other commands to make sure problem isn't replicated






[GitHub] [hadoop] nandakumar131 commented on a change in pull request #1364: HDDS-1843. Undetectable corruption after restart of a datanode.

2019-09-04 Thread GitBox
nandakumar131 commented on a change in pull request #1364: HDDS-1843. 
Undetectable corruption after restart of a datanode.
URL: https://github.com/apache/hadoop/pull/1364#discussion_r320841743
 
 

 ##
 File path: hadoop-hdds/common/src/main/proto/DatanodeContainerProtocol.proto
 ##
 @@ -248,8 +248,13 @@ message ContainerDataProto {
   optional ContainerType containerType = 10 [default = KeyValueContainer];
 }
 
-message ContainerIdSetProto {
-  repeated int64 containerId = 1;
+message Container2BCSIDMapEntryProto {
 
 Review comment:
   Never used, can be removed.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] nandakumar131 commented on a change in pull request #1364: HDDS-1843. Undetectable corruption after restart of a datanode.

2019-09-04 Thread GitBox
nandakumar131 commented on a change in pull request #1364: HDDS-1843. 
Undetectable corruption after restart of a datanode.
URL: https://github.com/apache/hadoop/pull/1364#discussion_r320842136
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/HddsDispatcher.java
 ##
 @@ -329,6 +336,21 @@ private ContainerCommandResponseProto dispatchRequest(
 }
   }
 
+  private void updateBCSID(Container container,
+  DispatcherContext dispatcherContext, ContainerProtos.Type cmdType) {
+long bcsID = container.getBlockCommitSequenceId();
+long containerId = container.getContainerData().getContainerID();
+Map<Long, Long> container2BCSIDMap;
+if (dispatcherContext != null && (cmdType == ContainerProtos.Type.PutBlock
+|| cmdType == ContainerProtos.Type.PutSmallFile)) {
+  container2BCSIDMap = dispatcherContext.getContainer2BCSIDMap();
+  Preconditions.checkNotNull(container2BCSIDMap);
+  Preconditions.checkArgument(container2BCSIDMap.containsKey(containerId));
+  // updates the latest BCSID on every putBlock or putSmallFile
+  // transaction over Ratis.
+  container2BCSIDMap.computeIfPresent(containerId, (u, v) -> v = bcsID);
 
 Review comment:
   `computeIfPresent` is not needed here, can be replaced with `Map#put`.
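   To illustrate the suggestion (names simplified; this is not the HddsDispatcher code itself): a `computeIfPresent` whose lambda ignores the old value is just an overwrite, which `Map#put` states directly.

   ```java
   import java.util.HashMap;
   import java.util.Map;

   public class BcsidMapDemo {

       /** The overwrite the dispatcher needs: the key is already known to be
        *  present (checked via Preconditions above), so a plain put suffices. */
       static void updateBcsid(Map<Long, Long> container2BCSIDMap,
                               long containerId, long bcsID) {
           container2BCSIDMap.put(containerId, bcsID);
       }

       public static void main(String[] args) {
           Map<Long, Long> map = new HashMap<>();
           map.put(1L, 100L);

           // Equivalent but noisier: the lambda discards the old value v,
           // so computeIfPresent adds nothing over put here.
           long bcsID = 105L;
           map.computeIfPresent(1L, (k, v) -> bcsID);

           updateBcsid(map, 1L, 110L);
           System.out.println(map.get(1L)); // 110
       }
   }
   ```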





[jira] [Commented] (HADOOP-16512) [hadoop-tools] Fix order of actual and expected expression in assert statements

2019-09-04 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16922608#comment-16922608
 ] 

Hadoop QA commented on HADOOP-16512:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
42s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
59s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 38s{color} | {color:orange} hadoop-tools: The patch generated 4 new + 277 
unchanged - 3 fixed = 281 total (was 280) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 24s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 
14s{color} | {color:green} hadoop-distcp in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
59s{color} | {color:green} hadoop-archives in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
19s{color} | {color:green} hadoop-kafka in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 82m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HADOOP-16512 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12979367/HADOOP-16512.002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux 0ef2f52212c3 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool 

[jira] [Commented] (HADOOP-16268) Allow custom wrapped exception to be thrown by server if RPC call queue is filled up

2019-09-04 Thread Erik Krogen (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16922600#comment-16922600
 ] 

Erik Krogen commented on HADOOP-16268:
--

[~crh] sorry for the delay, I just returned from vacation. I just committed 
this to trunk. Thanks for the contribution!

> Allow custom wrapped exception to be thrown by server if RPC call queue is 
> filled up
> 
>
> Key: HADOOP-16268
> URL: https://issues.apache.org/jira/browse/HADOOP-16268
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16268.001.patch, HADOOP-16268.002.patch, 
> HADOOP-16268.003.patch, HADOOP-16268.004.patch
>
>
> In the current implementation of callqueue manager, 
> "CallQueueOverflowException" exceptions are always wrapping 
> "RetriableException". Through configs servers should be allowed to throw 
> custom exceptions based on new use cases.
> In CallQueueManager.java for backoff the below is done 
> {code:java}
>   // ideally this behavior should be controllable too.
>   private void throwBackoff() throws IllegalStateException {
> throw CallQueueOverflowException.DISCONNECT;
>   }
> {code}
> Since CallQueueOverflowException only wraps RetriableException clients would 
> end up hitting the same server for retries. In use cases that router supports 
> these overflowed requests could be handled by another router that shares the 
> same state thus distributing load across a cluster of routers better. In the 
> absence of any custom exception, current behavior should be supported.
> In CallQueueOverflowException class a new Standby exception wrap should be 
> created. Something like the below
> {code:java}
>static final CallQueueOverflowException KEEPALIVE =
> new CallQueueOverflowException(
> new RetriableException(TOO_BUSY),
> RpcStatusProto.ERROR);
> static final CallQueueOverflowException DISCONNECT =
> new CallQueueOverflowException(
> new RetriableException(TOO_BUSY + " - disconnecting"),
> RpcStatusProto.FATAL);
> static final CallQueueOverflowException DISCONNECT2 =
> new CallQueueOverflowException(
> new StandbyException(TOO_BUSY + " - disconnecting"),
> RpcStatusProto.FATAL);
> {code}
>  
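
As a rough illustration of the config-driven selection the issue asks for - the quoted snippet hard-codes `DISCONNECT` - the sketch below uses simplified stub types rather than the real `CallQueueOverflowException`/`StandbyException` classes.

{code:java}
import java.util.HashMap;
import java.util.Map;

/** Illustrative stand-in for the config-selectable overflow behavior the
 *  issue asks for; these types are simplified stubs, not the real
 *  org.apache.hadoop.ipc classes. */
public class ConfigurableBackoff {

    static class OverflowException extends RuntimeException {
        OverflowException(String msg) { super(msg); }
    }

    // Pre-built policies, mirroring the static KEEPALIVE/DISCONNECT
    // constants in the quoted snippet.
    private static final Map<String, OverflowException> POLICIES = new HashMap<>();
    static {
        POLICIES.put("retriable",
            new OverflowException("server too busy - retry against same server"));
        POLICIES.put("standby",
            new OverflowException("server too busy - fail over to another router"));
    }

    /** Throws the overflow exception selected by a (hypothetical) config
     *  value; defaults to today's retriable behavior when unset. */
    static void throwBackoff(String configuredPolicy) {
        throw POLICIES.getOrDefault(configuredPolicy, POLICIES.get("retriable"));
    }
}
{code}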






[jira] [Updated] (HADOOP-16268) Allow custom wrapped exception to be thrown by server if RPC call queue is filled up

2019-09-04 Thread Erik Krogen (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HADOOP-16268:
-
Fix Version/s: 3.3.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Allow custom wrapped exception to be thrown by server if RPC call queue is 
> filled up
> 
>
> Key: HADOOP-16268
> URL: https://issues.apache.org/jira/browse/HADOOP-16268
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16268.001.patch, HADOOP-16268.002.patch, 
> HADOOP-16268.003.patch, HADOOP-16268.004.patch
>
>
> In the current implementation of callqueue manager, 
> "CallQueueOverflowException" exceptions are always wrapping 
> "RetriableException". Through configs servers should be allowed to throw 
> custom exceptions based on new use cases.
> In CallQueueManager.java for backoff the below is done 
> {code:java}
>   // ideally this behavior should be controllable too.
>   private void throwBackoff() throws IllegalStateException {
> throw CallQueueOverflowException.DISCONNECT;
>   }
> {code}
> Since CallQueueOverflowException only wraps RetriableException clients would 
> end up hitting the same server for retries. In use cases that router supports 
> these overflowed requests could be handled by another router that shares the 
> same state thus distributing load across a cluster of routers better. In the 
> absence of any custom exception, current behavior should be supported.
> In CallQueueOverflowException class a new Standby exception wrap should be 
> created. Something like the below
> {code:java}
>static final CallQueueOverflowException KEEPALIVE =
> new CallQueueOverflowException(
> new RetriableException(TOO_BUSY),
> RpcStatusProto.ERROR);
> static final CallQueueOverflowException DISCONNECT =
> new CallQueueOverflowException(
> new RetriableException(TOO_BUSY + " - disconnecting"),
> RpcStatusProto.FATAL);
> static final CallQueueOverflowException DISCONNECT2 =
> new CallQueueOverflowException(
> new StandbyException(TOO_BUSY + " - disconnecting"),
> RpcStatusProto.FATAL);
> {code}
>  






[jira] [Commented] (HADOOP-15726) Create utility to limit frequency of log statements

2019-09-04 Thread Erik Krogen (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16922595#comment-16922595
 ] 

Erik Krogen commented on HADOOP-15726:
--

Hi [~zhangchen], if I recall correctly, the read locks weren't done in this 
patch because the implementation of {{LogThrottlingHelper}} is not thread-safe, 
and the read lock variables are modified in a concurrent fashion. If you want 
to make enhancements to support read locks, I would be happy to review.
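
The pattern the issue describes - emit one message per period and summarize how many were suppressed - can be sketched minimally as follows. This is not the real {{LogThrottlingHelper}} API, and like the real helper (per the comment above) it is not thread-safe.

{code:java}
/** Minimal sketch of the log-rate-limiting pattern; not the real
 *  org.apache.hadoop.log.LogThrottlingHelper API, and not thread-safe. */
public class SimpleLogThrottler {
    private final long periodMs;
    private boolean started = false;
    private long lastLogTimeMs = 0;
    private long suppressedCount = 0;

    public SimpleLogThrottler(long periodMs) {
        this.periodMs = periodMs;
    }

    /** Returns the number of messages suppressed since the last emitted
     *  one when the caller should log now, or -1 to suppress this one. */
    public long record(long nowMs) {
        if (!started || nowMs - lastLogTimeMs >= periodMs) {
            started = true;
            lastLogTimeMs = nowMs;
            long suppressed = suppressedCount;
            suppressedCount = 0;
            return suppressed;
        }
        suppressedCount++;
        return -1;
    }

    public static void main(String[] args) {
        SimpleLogThrottler t = new SimpleLogThrottler(1000);
        System.out.println(t.record(0));     // 0  -> log
        System.out.println(t.record(200));   // -1 -> suppress
        System.out.println(t.record(400));   // -1 -> suppress
        System.out.println(t.record(1200));  // 2  -> log, 2 were suppressed
    }
}
{code}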

> Create utility to limit frequency of log statements
> ---
>
> Key: HADOOP-15726
> URL: https://issues.apache.org/jira/browse/HADOOP-15726
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, util
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.0.4, 3.1.2
>
> Attachments: HADOOP-15726.000.patch, HADOOP-15726.001.patch, 
> HADOOP-15726.002.patch, HADOOP-15726.003.patch, 
> HDFS-15726-branch-3.0.003.patch
>
>
> There is a common pattern of logging a behavior that is normally extraneous. 
> Under some circumstances, such a behavior becomes common, flooding the logs 
> and making it difficult to see what else is going on in the system. Under 
> such situations it is beneficial to limit how frequently the extraneous 
> behavior is logged, while capturing some summary information about the 
> suppressed log statements.
> This is currently implemented in {{FSNamesystemLock}} (in HDFS-10713). We 
> have additional use cases for this in HDFS-13791, so this is a good time to 
> create a common utility for different sites to share this logic.






[GitHub] [hadoop] hadoop-yetus commented on issue #1302: HADOOP-16138. hadoop fs mkdir / of nonexistent abfs container raises NPE

2019-09-04 Thread GitBox
hadoop-yetus commented on issue #1302: HADOOP-16138. hadoop fs mkdir / of 
nonexistent abfs container raises NPE
URL: https://github.com/apache/hadoop/pull/1302#issuecomment-527944423
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 54 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1149 | trunk passed |
   | +1 | compile | 32 | trunk passed |
   | +1 | checkstyle | 25 | trunk passed |
   | +1 | mvnsite | 35 | trunk passed |
   | +1 | shadedclient | 731 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 27 | trunk passed |
   | 0 | spotbugs | 52 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 50 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 30 | the patch passed |
   | +1 | compile | 23 | the patch passed |
   | +1 | javac | 23 | the patch passed |
   | +1 | checkstyle | 17 | the patch passed |
   | +1 | mvnsite | 28 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 883 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 24 | the patch passed |
   | +1 | findbugs | 58 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 82 | hadoop-azure in the patch passed. |
   | +1 | asflicense | 30 | The patch does not generate ASF License warnings. |
   | | | 3373 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1302/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1302 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 332bd5cb35fd 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 1ae7759 |
   | Default Java | 1.8.0_222 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1302/8/testReport/ |
   | Max. process+thread count | 411 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1302/8/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-13836) Securing Hadoop RPC using SSL

2019-09-04 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-13836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16922569#comment-16922569
 ] 

Hadoop QA commented on HADOOP-13836:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HADOOP-13836 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-13836 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12848944/HADOOP-13836-v4.patch 
|
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16515/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Securing Hadoop RPC using SSL
> -
>
> Key: HADOOP-13836
> URL: https://issues.apache.org/jira/browse/HADOOP-13836
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: ipc
>Reporter: kartheek muthyala
>Assignee: kartheek muthyala
>Priority: Major
> Attachments: HADOOP-13836-v2.patch, HADOOP-13836-v3.patch, 
> HADOOP-13836-v4.patch, HADOOP-13836.patch, Secure IPC OSS Proposal-1.pdf, 
> SecureIPC Performance Analysis-OSS.pdf
>
>
> Today, RPC connections in Hadoop are encrypted using Simple Authentication & 
> Security Layer (SASL), with the Kerberos ticket based authentication or 
> Digest-md5 checksum based authentication protocols. This proposal is about 
> enhancing this cipher suite with SSL/TLS based encryption and authentication. 
> SSL/TLS is a proposed Internet Engineering Task Force (IETF) standard, that 
> provides data security and integrity across two different end points in a 
> network. This protocol has made its way to a number of applications such as 
> web browsing, email, internet faxing, messaging, VOIP etc. And supporting 
> this cipher suite at the core of Hadoop would give a good synergy with the 
> applications on top and also bolster industry adoption of Hadoop.
> The Server and Client code in Hadoop IPC should support the following modes 
> of communication
> 1. Plain
> 2. SASL encryption with an underlying authentication
> 3. SSL based encryption and authentication (x509 certificate)






[jira] [Commented] (HADOOP-16534) Exclude submarine from hadoop source build

2019-09-04 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16922521#comment-16922521
 ] 

Hudson commented on HADOOP-16534:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17223 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17223/])
HADOOP-16534. Exclude submarine from hadoop source build. (#1356) (github: rev 
ac5a0ae6d0de6cf08040e2c1a95d9c6657fcf17a)
* (edit) pom.xml
* (edit) hadoop-assemblies/src/main/resources/assemblies/hadoop-src.xml


> Exclude submarine from hadoop source build
> --
>
> Key: HADOOP-16534
> URL: https://issues.apache.org/jira/browse/HADOOP-16534
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16534.000.patch
>
>
> When we do source package of hadoop, it should not contain submarine 
> project/code.






[jira] [Commented] (HADOOP-16543) Cached DNS name resolution error

2019-09-04 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16922461#comment-16922461
 ] 

Steve Loughran commented on HADOOP-16543:
-

see https://docs.oracle.com/javase/8/docs/technotes/guides/net/properties.html

> Cached DNS name resolution error
> 
>
> Key: HADOOP-16543
> URL: https://issues.apache.org/jira/browse/HADOOP-16543
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.2
>Reporter: Roger Liu
>Priority: Major
>
> In Kubernetes, a node may go down and then come back later with a 
> different IP address. Yarn clients which are already running will be unable 
> to rediscover the node after it comes back up due to caching the original IP 
> address. This is problematic for cases such as Spark HA on Kubernetes, as the 
> node containing the resource manager may go down and come back up, meaning 
> existing node managers must then also be restarted.






[jira] [Commented] (HADOOP-16543) Cached DNS name resolution error

2019-09-04 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16922460#comment-16922460
 ] 

Steve Loughran commented on HADOOP-16543:
-

# Java likes this caching and does it a lot internally. It's been known to cache 
negative DNS entries in the past too, which was always a nightmare.
# a lot of this caching is going to be in layers (httpclient, gRpc) beneath the 
hadoop code

For the specific case of hadoop's own services, 
* they should think about using a registry service (hadoop registry, etcd, ..) 
to find things on failure rather than just spin, though changing hostnames 
complicates kerberos in ways I fear.
* There are probably lots of places we haven't discovered which need fixing.

I propose
* you explore changing the Java DNS TTL to see what difference that makes.
* after doing that, if there are places deep in the codebase where we're caching 
DNS entries, we can worry about fixing that.
* if they are in dependent libraries, it'll have to span projects.
* if it's a matter of documentation a new document could be started covering 
the challenge of deploying hadoop applications in this world. 
* target the trunk branch for fixes; backporting can follow

I'm supportive of this effort, just avoiding committing anything except what I 
can do to review your work. Be advised, I'm never happy going near the IPC code 
myself, so reviews from others will be needed there.
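
For the first suggestion, the JVM-level DNS cache durations are controlled by the security properties documented at the link above; a sketch (the TTL values here are illustrative, not recommendations):

{code:java}
import java.security.Security;

public class DnsTtlConfig {

    /** Must run before the first name lookup in the JVM; the InetAddress
     *  cache reads these security properties when it initializes. */
    public static void shortenDnsCache() {
        // Cache successful lookups for 30s (the default is
        // implementation- and security-manager-dependent).
        Security.setProperty("networkaddress.cache.ttl", "30");
        // Don't pin failed lookups for long either - negative caching
        // is the "nightmare" case mentioned above.
        Security.setProperty("networkaddress.cache.negative.ttl", "5");
    }

    public static void main(String[] args) {
        shortenDnsCache();
        System.out.println(Security.getProperty("networkaddress.cache.ttl"));
        System.out.println(Security.getProperty("networkaddress.cache.negative.ttl"));
    }
}
{code}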

> Cached DNS name resolution error
> 
>
> Key: HADOOP-16543
> URL: https://issues.apache.org/jira/browse/HADOOP-16543
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.2
>Reporter: Roger Liu
>Priority: Major
>
> In Kubernetes, a node may go down and then come back later with a 
> different IP address. Yarn clients which are already running will be unable 
> to rediscover the node after it comes back up due to caching the original IP 
> address. This is problematic for cases such as Spark HA on Kubernetes, as the 
> node containing the resource manager may go down and come back up, meaning 
> existing node managers must then also be restarted.






[jira] [Created] (HADOOP-16546) make sure staging committers collect DTs for the staging FS

2019-09-04 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-16546:
---

 Summary: make sure staging committers collect DTs for the staging 
FS
 Key: HADOOP-16546
 URL: https://issues.apache.org/jira/browse/HADOOP-16546
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.2.0
Reporter: Steve Loughran


This is not a problem I've seen in the wild, but I've now encountered a problem 
with hive doing something like this

we need to (somehow) make sure that the staging committers collect DTs for the 
staging dir FS. If this is the default FS or the same as a source or dest FS, 
this is handled elsewhere, but otherwise we need to add the staging fs.

I don't see an easy way to do this, but we could add a new method to 
PathOutputCommitter to collect DTs; FileOutputFormat can invoke this alongside 
its ongoing collection of tokens for the output FS. Base impl would be a no-op, 
obviously.
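
A rough sketch of the proposed no-op hook, using simplified stand-in types rather than the real PathOutputCommitter/Credentials classes:

{code:java}
import java.util.ArrayList;
import java.util.List;

/** Sketch of the proposed delegation-token hook; all types here are
 *  simplified stand-ins, not the real Hadoop classes. */
public class DtCollectionSketch {

    /** Stand-in for a credential set; real code would use
     *  org.apache.hadoop.security.Credentials. */
    static class Tokens {
        final List<String> tokens = new ArrayList<>();
    }

    /** Base committer: collecting extra delegation tokens is a no-op,
     *  matching the "base impl would be a no-op" note above. */
    static class Committer {
        void collectDelegationTokens(Tokens creds) {
            // no-op by default
        }
    }

    /** A staging-style committer adds a token for its staging FS, which
     *  may differ from both the source and destination filesystems. */
    static class StagingCommitter extends Committer {
        private final String stagingFsUri;
        StagingCommitter(String stagingFsUri) { this.stagingFsUri = stagingFsUri; }

        @Override
        void collectDelegationTokens(Tokens creds) {
            creds.tokens.add("DT for " + stagingFsUri);
        }
    }

    public static void main(String[] args) {
        Tokens creds = new Tokens();
        // FileOutputFormat-style driver: ask the committer for any extra DTs
        // alongside the tokens it already collects for the output FS.
        new StagingCommitter("hdfs://staging/").collectDelegationTokens(creds);
        System.out.println(creds.tokens);
    }
}
{code}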






[GitHub] [hadoop] ChenSammi commented on issue #1361: HDDS-1553. Add metrics in rack aware container placement policy.

2019-09-04 Thread GitBox
ChenSammi commented on issue #1361: HDDS-1553. Add metrics in rack aware 
container placement policy.
URL: https://github.com/apache/hadoop/pull/1361#issuecomment-527876085
 
 
   Thanks Xiaoyu for the comments. 





[GitHub] [hadoop] nandakumar131 closed pull request #1396: HDDS-2077. Add maven-gpg-plugin.version to pom.ozone.xml.

2019-09-04 Thread GitBox
nandakumar131 closed pull request #1396: HDDS-2077. Add 
maven-gpg-plugin.version to pom.ozone.xml.
URL: https://github.com/apache/hadoop/pull/1396
 
 
   





[GitHub] [hadoop] steveloughran commented on issue #1359: HADOOP-16430.S3AFilesystem.delete to incrementally update s3guard with deletions

2019-09-04 Thread GitBox
steveloughran commented on issue #1359: HADOOP-16430.S3AFilesystem.delete to 
incrementally update s3guard with deletions
URL: https://github.com/apache/hadoop/pull/1359#issuecomment-527832823
 
 
   Some checkstyles; will fix
   
   ```
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileStatus.java:110:
  S3AFileStatus(Path path,:3: More than 7 parameters (found 9). 
[ParameterNumber]
   
./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AMetadataPersistenceException.java:116:
() -> { outputStream.close(); });:23: '{' at column 23 should 
have line break after. [LeftCurly]
   
./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3GuardListConsistency.java:536:
Path root = path("testInconsistentS3ClientDeletes-" + 
DEFAULT_DELAY_KEY_SUBSTRING);: Line is longer than 80 characters (found 87). 
[LineLength]
   ```





[GitHub] [hadoop] nandakumar131 commented on issue #1396: HDDS-2077. Add maven-gpg-plugin.version to pom.ozone.xml.

2019-09-04 Thread GitBox
nandakumar131 commented on issue #1396: HDDS-2077. Add maven-gpg-plugin.version 
to pom.ozone.xml.
URL: https://github.com/apache/hadoop/pull/1396#issuecomment-527831651
 
 
   Test failures are not related. I will commit this shortly.





[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1359: HADOOP-16430.S3AFilesystem.delete to incrementally update s3guard with deletions

2019-09-04 Thread GitBox
hadoop-yetus removed a comment on issue #1359: 
HADOOP-16430.S3AFilesystem.delete to incrementally update s3guard with deletions
URL: https://github.com/apache/hadoop/pull/1359#issuecomment-526633786
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 39 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 18 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 67 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1070 | trunk passed |
   | +1 | compile | 1039 | trunk passed |
   | +1 | checkstyle | 141 | trunk passed |
   | +1 | mvnsite | 125 | trunk passed |
   | +1 | shadedclient | 977 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 105 | trunk passed |
   | 0 | spotbugs | 66 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 188 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for patch |
   | +1 | mvninstall | 81 | the patch passed |
   | +1 | compile | 966 | the patch passed |
   | +1 | javac | 966 | the patch passed |
   | -0 | checkstyle | 139 | root: The patch generated 3 new + 93 unchanged - 5 
fixed = 96 total (was 98) |
   | +1 | mvnsite | 121 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 650 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 34 | hadoop-tools_hadoop-aws generated 1 new + 1 unchanged 
- 0 fixed = 2 total (was 1) |
   | +1 | findbugs | 203 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 568 | hadoop-common in the patch passed. |
   | +1 | unit | 88 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 44 | The patch does not generate ASF License warnings. |
   | | | 6772 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1359/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1359 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux b829024a92bb 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / c929b38 |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1359/3/artifact/out/diff-checkstyle-root.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1359/3/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1359/3/testReport/ |
   | Max. process+thread count | 1397 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1359/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1359: HADOOP-16430.S3AFilesystem.delete to incrementally update s3guard with deletions

2019-09-04 Thread GitBox
hadoop-yetus removed a comment on issue #1359: 
HADOOP-16430.S3AFilesystem.delete to incrementally update s3guard with deletions
URL: https://github.com/apache/hadoop/pull/1359#issuecomment-525603355
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 38 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 13 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 68 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1296 | trunk passed |
   | +1 | compile | 1156 | trunk passed |
   | +1 | checkstyle | 159 | trunk passed |
   | +1 | mvnsite | 139 | trunk passed |
   | +1 | shadedclient | 1137 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 96 | trunk passed |
   | 0 | spotbugs | 64 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 184 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for patch |
   | +1 | mvninstall | 85 | the patch passed |
   | +1 | compile | 1087 | the patch passed |
   | +1 | javac | 1087 | the patch passed |
   | -0 | checkstyle | 148 | root: The patch generated 9 new + 63 unchanged - 1 
fixed = 72 total (was 64) |
   | +1 | mvnsite | 117 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 747 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 31 | hadoop-tools_hadoop-aws generated 2 new + 1 unchanged 
- 0 fixed = 3 total (was 1) |
   | +1 | findbugs | 224 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 561 | hadoop-common in the patch failed. |
   | +1 | unit | 73 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 43 | The patch does not generate ASF License warnings. |
   | | | 7458 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.fs.contract.localfs.TestLocalFSContractGetFileStatus |
   |   | hadoop.fs.contract.rawlocal.TestRawlocalContractGetFileStatus |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1359/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1359 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 87a0fb805e4e 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / b1eee8b |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1359/1/artifact/out/diff-checkstyle-root.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1359/1/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1359/1/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1359/1/testReport/ |
   | Max. process+thread count | 1468 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1359/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1229: HADOOP-16490. Improve S3Guard handling of FNFEs in copy

2019-09-04 Thread GitBox
hadoop-yetus removed a comment on issue #1229: HADOOP-16490. Improve S3Guard 
handling of FNFEs in copy
URL: https://github.com/apache/hadoop/pull/1229#issuecomment-525242672
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 34 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1282 | trunk passed |
   | +1 | compile | 1196 | trunk passed |
   | +1 | checkstyle | 158 | trunk passed |
   | +1 | mvnsite | 129 | trunk passed |
   | +1 | shadedclient | 1076 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 92 | trunk passed |
   | 0 | spotbugs | 65 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 181 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for patch |
   | +1 | mvninstall | 83 | the patch passed |
   | +1 | compile | 1043 | the patch passed |
   | +1 | javac | 1043 | the patch passed |
   | +1 | checkstyle | 159 | root: The patch generated 0 new + 97 unchanged - 2 
fixed = 97 total (was 99) |
   | +1 | mvnsite | 132 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 718 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 110 | the patch passed |
   | +1 | findbugs | 204 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 583 | hadoop-common in the patch passed. |
   | +1 | unit | 73 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 44 | The patch does not generate ASF License warnings. |
   | | | 7330 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1229/17/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1229 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 30aa52bd4dd8 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3329257 |
   | Default Java | 1.8.0_222 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1229/17/testReport/ |
   | Max. process+thread count | 1439 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1229/17/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1229: HADOOP-16490. Improve S3Guard handling of FNFEs in copy

2019-09-04 Thread GitBox
hadoop-yetus removed a comment on issue #1229: HADOOP-16490. Improve S3Guard 
handling of FNFEs in copy
URL: https://github.com/apache/hadoop/pull/1229#issuecomment-520141965
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 45 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 66 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1067 | trunk passed |
   | +1 | compile | 1045 | trunk passed |
   | +1 | checkstyle | 147 | trunk passed |
   | +1 | mvnsite | 124 | trunk passed |
   | +1 | shadedclient | 1011 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 105 | trunk passed |
   | 0 | spotbugs | 68 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 181 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for patch |
   | +1 | mvninstall | 77 | the patch passed |
   | +1 | compile | 1014 | the patch passed |
   | +1 | javac | 1014 | the patch passed |
   | +1 | checkstyle | 141 | root: The patch generated 0 new + 46 unchanged - 2 
fixed = 46 total (was 48) |
   | +1 | mvnsite | 124 | the patch passed |
   | -1 | whitespace | 0 | The patch has 2 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 670 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 106 | the patch passed |
   | +1 | findbugs | 199 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 479 | hadoop-common in the patch failed. |
   | +1 | unit | 77 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 46 | The patch does not generate ASF License warnings. |
   | | | 6781 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.fs.shell.TestCopy |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1229/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1229 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 0eeb6b02f19c 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / fba222a |
   | Default Java | 1.8.0_222 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1229/9/artifact/out/whitespace-eol.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1229/9/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1229/9/testReport/ |
   | Max. process+thread count | 1463 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1229/9/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




