[GitHub] [hadoop] hadoop-yetus commented on issue #903: HDDS-1490. Support configurable container placement policy through 'o…

2019-06-05 Thread GitBox
hadoop-yetus commented on issue #903: HDDS-1490. Support configurable container 
placement policy through 'o…
URL: https://github.com/apache/hadoop/pull/903#issuecomment-499355283
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 32 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 9 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for branch |
   | +1 | mvninstall | 492 | trunk passed |
   | +1 | compile | 293 | trunk passed |
   | +1 | checkstyle | 88 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 875 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 154 | trunk passed |
   | 0 | spotbugs | 334 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 521 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 28 | Maven dependency ordering for patch |
   | +1 | mvninstall | 441 | the patch passed |
   | +1 | compile | 274 | the patch passed |
   | +1 | javac | 274 | the patch passed |
   | +1 | checkstyle | 64 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 5 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 626 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 156 | the patch passed |
   | +1 | findbugs | 523 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 233 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1357 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 40 | The patch does not generate ASF License warnings. |
   | | | 6380 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-903/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/903 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 20e99d05d7c3 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 73954c1 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-903/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-903/2/testReport/ |
   | Max. process+thread count | 4953 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/server-scm 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-903/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16117) Upgrade to latest AWS SDK

2019-06-05 Thread Aaron Fabbri (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16857270#comment-16857270
 ] 

Aaron Fabbri commented on HADOOP-16117:
---

+1 LGTM for trunk. Agreed, extended testing is to be expected, as the AWS SDK 
has a history of subtle bugs.

> Upgrade to latest AWS SDK
> -
>
> Key: HADOOP-16117
> URL: https://issues.apache.org/jira/browse/HADOOP-16117
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Upgrade to the most recent AWS SDK. That's 1.11; even though there's a 2.0 
> out, it would be a more significant upgrade, with downstream impact.
> The new [AWS SDK update 
> process|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md#-qualifying-an-aws-sdk-update]
>  *must* be followed, and we should plan for 1-2 surprises afterwards anyway.
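
For context, an upgrade of this kind is normally a one-line bump of the 
managed SDK version in hadoop-project/pom.xml, followed by the full 
qualification run linked above. A minimal sketch (property name as used on 
trunk at the time; 1.11.563 is the version proposed in the companion PR #818):

{code:xml}
<!-- hadoop-project/pom.xml: managed AWS SDK version (illustrative sketch) -->
<properties>
  <aws-java-sdk.version>1.11.563</aws-java-sdk.version>
</properties>
{code}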



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16350) Ability to tell Hadoop not to request KMS Information from Remote NN

2019-06-05 Thread Greg Senia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Senia updated HADOOP-16350:

Fix Version/s: 3.3.0
   Status: Patch Available  (was: Open)

> Ability to tell Hadoop not to request KMS Information from Remote NN 
> -
>
> Key: HADOOP-16350
> URL: https://issues.apache.org/jira/browse/HADOOP-16350
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, kms
>Affects Versions: 3.1.2, 2.7.6, 3.0.0, 2.8.3
>Reporter: Greg Senia
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16350.patch
>
>
> Before HADOOP-14104, remote KMSServer URIs and their associated remote 
> KMSServer delegation tokens were not requested from the remote NameNode. 
> Many customers were using this as a security feature to prevent 
> TDE/Encryption Zone data from being distcped to remote clusters. But there 
> was still a use case to allow distcp of data residing in folders that are 
> not encrypted with a KMSProvider/Encryption Zone.
> After upgrading to a version of Hadoop that contains HADOOP-14104, distcp 
> now fails, because we, along with other customers (HDFS-13696), DO NOT 
> allow KMSServer endpoints to be exposed outside our cluster network, as 
> data residing in these TDE zones is critical and cannot be distcped 
> between clusters.
> I propose guarding this code path with a new custom property, 
> "hadoop.security.kms.client.allow.remote.kms". It will default to "true", 
> preserving the current HADOOP-14104 behavior, but when set to "false" it 
> will let this area of code operate as it did before HADOOP-14104. I can 
> see the value in HADOOP-14104, but the pre-existing behavior should at 
> least have been kept behind an option, so that Hadoop/KMS code can behave 
> as it did before by not requesting remote KMSServer URIs, which would 
> otherwise trigger a delegation token request even when not operating on 
> encrypted zones.
> The error below occurs when KMS Server traffic is not allowed between 
> cluster networks, per an enterprise security standard that cannot be 
> changed; the request for an exception was denied, so the only solution is 
> a feature that does not attempt to request these tokens. 
> {code:java}
> $ hadoop distcp -Ddfs.namenode.kerberos.principal.pattern=* 
> -Dmapreduce.job.hdfs-servers.token-renewal.exclude=tech 
> hdfs:///processed/public/opendata/samples/distcp_test/distcp_file.txt 
> hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt
> 19/05/29 14:06:09 INFO tools.DistCp: Input Options: DistCpOptions
> {atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, overwrite=false, append=false, useDiff=false, 
> fromSnapshot=null, toSnapshot=null, skipCRC=false, blocking=true, 
> numListstatusThreads=0, maxMaps=20, mapBandwidth=100, 
> sslConfigurationFile='null', copyStrategy='uniformsize', preserveStatus=[], 
> preserveRawXattrs=false, atomicWorkPath=null, logPath=null, 
> sourceFileListing=null, 
> sourcePaths=[hdfs:/processed/public/opendata/samples/distcp_test/distcp_file.txt],
>  
> targetPath=hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt,
>  targetPathExists=true, filtersFile='null', verboseLog=false}
> 19/05/29 14:06:09 INFO client.AHSProxy: Connecting to Application History 
> server at ha21d53mn.unit.hdp.example.com/10.70.49.2:10200
> 19/05/29 14:06:10 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 
> 5093920 for gss2002 on ha-hdfs:unit
> 19/05/29 14:06:10 INFO security.TokenCache: Got dt for hdfs://unit; Kind: 
> HDFS_DELEGATION_TOKEN, Service: ha-hdfs:unit, Ident: (HDFS_DELEGATION_TOKEN 
> token 5093920 for gss2002)
> 19/05/29 14:06:10 INFO security.TokenCache: Got dt for hdfs://unit; Kind: 
> kms-dt, Service: ha21d53en.unit.hdp.example.com:9292, Ident: (owner=gss2002, 
> renewer=yarn, realUser=, issueDate=1559153170120, maxDate=1559757970120, 
> sequenceNumber=237, masterKeyId=2)
> 19/05/29 14:06:10 INFO tools.SimpleCopyListing: Paths (files+dirs) cnt = 1; 
> dirCnt = 0
> 19/05/29 14:06:10 INFO tools.SimpleCopyListing: Build file listing completed.
> 19/05/29 14:06:10 INFO tools.DistCp: Number of paths in the copy list: 1
> 19/05/29 14:06:10 INFO tools.DistCp: Number of paths in the copy list: 1
> 19/05/29 14:06:10 INFO client.AHSProxy: Connecting to Application History 
> server at ha21d53mn.unit.hdp.example.com/10.70.49.2:10200
> 19/05/29 14:06:10 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 
> 556079 for gss2002 on ha-hdfs:tech
> 19/05/29 14:06:10 ERROR tools.DistCp: Exception encountered 
> java.io.IOException: java.net.NoRouteToHostException: No route to host (Host 
> unreachable)
> at 
> 
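
A minimal sketch of the proposed switch as it might appear in core-site.xml 
(the property name is the one proposed above; the entry is hypothetical until 
the patch lands):

{code:xml}
<property>
  <name>hadoop.security.kms.client.allow.remote.kms</name>
  <!-- Default "true" keeps the HADOOP-14104 behavior; "false" skips
       requesting KMS URIs and delegation tokens from a remote NameNode. -->
  <value>false</value>
</property>
{code}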

[jira] [Updated] (HADOOP-16350) Ability to tell Hadoop not to request KMS Information from Remote NN

2019-06-05 Thread Greg Senia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Senia updated HADOOP-16350:

Attachment: HADOOP-16350.patch

> Ability to tell Hadoop not to request KMS Information from Remote NN 
> -
>
> Key: HADOOP-16350
> URL: https://issues.apache.org/jira/browse/HADOOP-16350
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, kms
>Affects Versions: 2.8.3, 3.0.0, 2.7.6, 3.1.2
>Reporter: Greg Senia
>Priority: Major
> Attachments: HADOOP-16350.patch
>
>
> Before HADOOP-14104, remote KMSServer URIs and their associated remote 
> KMSServer delegation tokens were not requested from the remote NameNode. 
> Many customers were using this as a security feature to prevent 
> TDE/Encryption Zone data from being distcped to remote clusters. But there 
> was still a use case to allow distcp of data residing in folders that are 
> not encrypted with a KMSProvider/Encryption Zone.
> After upgrading to a version of Hadoop that contains HADOOP-14104, distcp 
> now fails, because we, along with other customers (HDFS-13696), DO NOT 
> allow KMSServer endpoints to be exposed outside our cluster network, as 
> data residing in these TDE zones is critical and cannot be distcped 
> between clusters.
> I propose guarding this code path with a new custom property, 
> "hadoop.security.kms.client.allow.remote.kms". It will default to "true", 
> preserving the current HADOOP-14104 behavior, but when set to "false" it 
> will let this area of code operate as it did before HADOOP-14104. I can 
> see the value in HADOOP-14104, but the pre-existing behavior should at 
> least have been kept behind an option, so that Hadoop/KMS code can behave 
> as it did before by not requesting remote KMSServer URIs, which would 
> otherwise trigger a delegation token request even when not operating on 
> encrypted zones.
> The error below occurs when KMS Server traffic is not allowed between 
> cluster networks, per an enterprise security standard that cannot be 
> changed; the request for an exception was denied, so the only solution is 
> a feature that does not attempt to request these tokens. 
> {code:java}
> $ hadoop distcp -Ddfs.namenode.kerberos.principal.pattern=* 
> -Dmapreduce.job.hdfs-servers.token-renewal.exclude=tech 
> hdfs:///processed/public/opendata/samples/distcp_test/distcp_file.txt 
> hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt
> 19/05/29 14:06:09 INFO tools.DistCp: Input Options: DistCpOptions
> {atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, overwrite=false, append=false, useDiff=false, 
> fromSnapshot=null, toSnapshot=null, skipCRC=false, blocking=true, 
> numListstatusThreads=0, maxMaps=20, mapBandwidth=100, 
> sslConfigurationFile='null', copyStrategy='uniformsize', preserveStatus=[], 
> preserveRawXattrs=false, atomicWorkPath=null, logPath=null, 
> sourceFileListing=null, 
> sourcePaths=[hdfs:/processed/public/opendata/samples/distcp_test/distcp_file.txt],
>  
> targetPath=hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt,
>  targetPathExists=true, filtersFile='null', verboseLog=false}
> 19/05/29 14:06:09 INFO client.AHSProxy: Connecting to Application History 
> server at ha21d53mn.unit.hdp.example.com/10.70.49.2:10200
> 19/05/29 14:06:10 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 
> 5093920 for gss2002 on ha-hdfs:unit
> 19/05/29 14:06:10 INFO security.TokenCache: Got dt for hdfs://unit; Kind: 
> HDFS_DELEGATION_TOKEN, Service: ha-hdfs:unit, Ident: (HDFS_DELEGATION_TOKEN 
> token 5093920 for gss2002)
> 19/05/29 14:06:10 INFO security.TokenCache: Got dt for hdfs://unit; Kind: 
> kms-dt, Service: ha21d53en.unit.hdp.example.com:9292, Ident: (owner=gss2002, 
> renewer=yarn, realUser=, issueDate=1559153170120, maxDate=1559757970120, 
> sequenceNumber=237, masterKeyId=2)
> 19/05/29 14:06:10 INFO tools.SimpleCopyListing: Paths (files+dirs) cnt = 1; 
> dirCnt = 0
> 19/05/29 14:06:10 INFO tools.SimpleCopyListing: Build file listing completed.
> 19/05/29 14:06:10 INFO tools.DistCp: Number of paths in the copy list: 1
> 19/05/29 14:06:10 INFO tools.DistCp: Number of paths in the copy list: 1
> 19/05/29 14:06:10 INFO client.AHSProxy: Connecting to Application History 
> server at ha21d53mn.unit.hdp.example.com/10.70.49.2:10200
> 19/05/29 14:06:10 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 
> 556079 for gss2002 on ha-hdfs:tech
> 19/05/29 14:06:10 ERROR tools.DistCp: Exception encountered 
> java.io.IOException: java.net.NoRouteToHostException: No route to host (Host 
> unreachable)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.addDelegationTokens(KMSClientProvider.java:1029)
> at 
> 
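
Conceptually, the patch gates the remote-KMS token fetch on the new flag. A 
rough, hypothetical sketch of such a guard (the class and method names are 
illustrative, not the attached patch):

{code:java}
import org.apache.hadoop.conf.Configuration;

public final class RemoteKmsPolicy {
  /** Property proposed in this JIRA; the default preserves HADOOP-14104. */
  public static final String KMS_CLIENT_ALLOW_REMOTE_KMS =
      "hadoop.security.kms.client.allow.remote.kms";

  private RemoteKmsPolicy() {
  }

  /**
   * Whether the client may ask a remote NameNode for its KMS provider URIs
   * and then request KMS delegation tokens from those endpoints.
   */
  public static boolean allowRemoteKms(Configuration conf) {
    return conf.getBoolean(KMS_CLIENT_ALLOW_REMOTE_KMS, true);
  }
}
{code}

A caller such as the DistCp token-collection path would consult 
allowRemoteKms(conf) before resolving remote KMS URIs, and skip the token 
request entirely when it returns false.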

[GitHub] [hadoop] ChenSammi commented on issue #903: HDDS-1490. Support configurable container placement policy through 'o…

2019-06-05 Thread GitBox
ChenSammi commented on issue #903: HDDS-1490. Support configurable container 
placement policy through 'o…
URL: https://github.com/apache/hadoop/pull/903#issuecomment-499336056
 
 
   @xiaoyuyao, thanks for the information. I didn't realize that because I 
explicitly added testResources to the pom.xml, the resources under 
${basedir}/src/test/resources get ignored. I will upload a new commit shortly.
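
   For reference, Maven's documented behavior is that an explicit 
<testResources> section in pom.xml replaces the default 
${basedir}/src/test/resources entry rather than extending it, so the default 
directory has to be re-declared. A minimal sketch (the second directory is 
illustrative):

```xml
<build>
  <testResources>
    <!-- Re-declare the default: an explicit <testResources> section
         replaces it rather than adding to it. -->
    <testResource>
      <directory>${basedir}/src/test/resources</directory>
    </testResource>
    <!-- The extra directory that motivated the explicit section
         (path illustrative). -->
    <testResource>
      <directory>${basedir}/src/test/extra-resources</directory>
    </testResource>
  </testResources>
</build>
```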


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao merged pull request #910: HDDS-1612. Add 'scmcli printTopology' shell command to print datanode…

2019-06-05 Thread GitBox
xiaoyuyao merged pull request #910: HDDS-1612. Add 'scmcli printTopology' shell 
command to print datanode…
URL: https://github.com/apache/hadoop/pull/910
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #910: HDDS-1612. Add 'scmcli printTopology' shell command to print datanode…

2019-06-05 Thread GitBox
xiaoyuyao commented on a change in pull request #910: HDDS-1612. Add 'scmcli 
printTopology' shell command to print datanode…
URL: https://github.com/apache/hadoop/pull/910#discussion_r291007246
 
 

 ##
 File path: 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/TopologySubcommand.java
 ##
 @@ -0,0 +1,80 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.cli;
+
+import org.apache.hadoop.hdds.cli.HddsVersionProvider;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.client.ScmClient;
+import picocli.CommandLine;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.DEAD;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.DECOMMISSIONED;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.DECOMMISSIONING;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.HEALTHY;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.STALE;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.Callable;
+
+/**
+ * Handler of printTopology command.
+ */
+@CommandLine.Command(
+    name = "printTopology",
+    description = "Print a tree of the network topology as reported by SCM",
+    mixinStandardHelpOptions = true,
+    versionProvider = HddsVersionProvider.class)
+public class TopologySubcommand implements Callable<Void> {
+
+  @CommandLine.ParentCommand
+  private SCMCLI parent;
+
+  private static List<HddsProtos.NodeState> stateArray = new ArrayList<>();
+
+  static {
+    stateArray.add(HEALTHY);
+    stateArray.add(STALE);
+    stateArray.add(DEAD);
+    stateArray.add(DECOMMISSIONING);
+    stateArray.add(DECOMMISSIONED);
+  }
+
+  @Override
+  public Void call() throws Exception {
+    try (ScmClient scmClient = parent.createScmClient()) {
+      for (HddsProtos.NodeState state : stateArray) {
+        List<HddsProtos.Node> nodes = scmClient.queryNode(state,
+            HddsProtos.QueryScope.CLUSTER, "");
+        if (nodes != null && nodes.size() > 0) {
+          // show node state
+          System.out.println("State = " + state.toString());
 
 Review comment:
   Agree, let's add that in a follow-up JIRA. I will merge it shortly. 
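
   For reference, once merged the subcommand is invoked through the Ozone SCM 
CLI; a hypothetical session (wrapper command assumed; the output shape follows 
the code above, with node lines illustrative):

```
$ ozone scmcli printTopology
State = HEALTHY
ozone_datanode_1.ozone_default/172.18.0.3
ozone_datanode_2.ozone_default/172.18.0.2
```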


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #818: HADOOP-16117 Update AWS SDK to 1.11.563

2019-06-05 Thread GitBox
hadoop-yetus commented on issue #818: HADOOP-16117 Update AWS SDK to 1.11.563
URL: https://github.com/apache/hadoop/pull/818#issuecomment-499326011
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 32 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 65 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1042 | trunk passed |
   | +1 | compile | 1014 | trunk passed |
   | +1 | mvnsite | 844 | trunk passed |
   | +1 | shadedclient | 3636 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 342 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for patch |
   | +1 | mvninstall | 1073 | the patch passed |
   | +1 | compile | 965 | the patch passed |
   | +1 | javac | 965 | the patch passed |
   | +1 | mvnsite | 828 | the patch passed |
   | -1 | whitespace | 0 | The patch has 1 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 668 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 379 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 8185 | root in the patch failed. |
   | +1 | asflicense | 46 | The patch does not generate ASF License warnings. |
   | | | 16342 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestMultipleNNPortQOP |
   |   | hadoop.hdfs.web.TestWebHdfsTimeouts |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-818/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/818 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml |
   | uname | Linux 69a492540875 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3b1c257 |
   | Default Java | 1.8.0_212 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-818/4/artifact/out/whitespace-eol.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-818/4/artifact/out/patch-unit-root.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-818/4/testReport/ |
   | Max. process+thread count | 4427 (vs. ulimit of 5500) |
   | modules | C: hadoop-project hadoop-tools/hadoop-aws . U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-818/4/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ChenSammi commented on a change in pull request #910: HDDS-1612. Add 'scmcli printTopology' shell command to print datanode…

2019-06-05 Thread GitBox
ChenSammi commented on a change in pull request #910: HDDS-1612. Add 'scmcli 
printTopology' shell command to print datanode…
URL: https://github.com/apache/hadoop/pull/910#discussion_r290995013
 
 

 ##
 File path: 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/TopologySubcommand.java
 ##
 @@ -0,0 +1,80 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.cli;
+
+import org.apache.hadoop.hdds.cli.HddsVersionProvider;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.client.ScmClient;
+import picocli.CommandLine;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.DEAD;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.DECOMMISSIONED;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.DECOMMISSIONING;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.HEALTHY;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.STALE;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.Callable;
+
+/**
+ * Handler of printTopology command.
+ */
+@CommandLine.Command(
+    name = "printTopology",
+    description = "Print a tree of the network topology as reported by SCM",
+    mixinStandardHelpOptions = true,
+    versionProvider = HddsVersionProvider.class)
+public class TopologySubcommand implements Callable<Void> {
+
+  @CommandLine.ParentCommand
+  private SCMCLI parent;
+
+  private static List<HddsProtos.NodeState> stateArray = new ArrayList<>();
+
+  static {
+    stateArray.add(HEALTHY);
+    stateArray.add(STALE);
+    stateArray.add(DEAD);
+    stateArray.add(DECOMMISSIONING);
+    stateArray.add(DECOMMISSIONED);
+  }
+
+  @Override
+  public Void call() throws Exception {
+    try (ScmClient scmClient = parent.createScmClient()) {
+      for (HddsProtos.NodeState state : stateArray) {
+        List<HddsProtos.Node> nodes = scmClient.queryNode(state,
+            HddsProtos.QueryScope.CLUSTER, "");
+        if (nodes != null && nodes.size() > 0) {
+          // show node state
+          System.out.println("State = " + state.toString());
 
 Review comment:
   @xiaoyuyao , we can add an option to order the output according to topology 
layer. For example, for a /rack/node topology, we can show: 
   State = HEALTHY
   /default-rack: 
   ozone_datanode_1.ozone_default/172.18.0.3   
   ozone_datanode_2.ozone_default/172.18.0.2   
   ozone_datanode_3.ozone_default/172.18.0.4   
   /rack1:
   ozone_datanode_4.ozone_default/172.18.0.5   
   ozone_datanode_5.ozone_default/172.18.0.6   
   For a /dc/rack/node topology, we can show either 
   State = HEALTHY
   /default-dc/default-rack: 
   ozone_datanode_1.ozone_default/172.18.0.3   
   ozone_datanode_2.ozone_default/172.18.0.2   
   ozone_datanode_3.ozone_default/172.18.0.4   
   /dc1/rack1:
   ozone_datanode_4.ozone_default/172.18.0.5   
   ozone_datanode_5.ozone_default/172.18.0.6   
   
   or 
   
   State = HEALTHY
   default-dc: 
 default-rack: 
   ozone_datanode_1.ozone_default/172.18.0.3   
   ozone_datanode_2.ozone_default/172.18.0.2   
   ozone_datanode_3.ozone_default/172.18.0.4   
   dc1:
 rack1:
   ozone_datanode_4.ozone_default/172.18.0.5   
   ozone_datanode_5.ozone_default/172.18.0.6   
   
   This can be done in a follow-on JIRA. For now, the plain format meets our 
needs. 
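
   As a rough illustration of the grouping idea above, a minimal, hypothetical 
sketch (not part of this PR; it assumes each node's network location is 
already known):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

final class TopologyPrinter {
  /** Prints nodes grouped by network location, e.g. "/dc1/rack1". */
  static void printGrouped(String state, Map<String, String> nodeLocations) {
    System.out.println("State = " + state);
    // A TreeMap keeps the location paths sorted for stable output.
    Map<String, List<String>> byLocation = new TreeMap<>();
    nodeLocations.forEach((node, location) ->
        byLocation.computeIfAbsent(location, k -> new ArrayList<>()).add(node));
    byLocation.forEach((location, nodes) -> {
      System.out.println(location + ":");
      nodes.forEach(n -> System.out.println("  " + n));
    });
  }
}
```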


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #867: HDDS-1605. Implement AuditLogging for OM HA Bucket write requests.

2019-06-05 Thread GitBox
hadoop-yetus commented on issue #867: HDDS-1605. Implement AuditLogging for OM 
HA Bucket write requests.
URL: https://github.com/apache/hadoop/pull/867#issuecomment-499316838
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 38 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for branch |
   | +1 | mvninstall | 515 | trunk passed |
   | +1 | compile | 294 | trunk passed |
   | +1 | checkstyle | 72 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 801 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 157 | trunk passed |
   | 0 | spotbugs | 344 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 541 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 65 | Maven dependency ordering for patch |
   | +1 | mvninstall | 466 | the patch passed |
   | +1 | compile | 283 | the patch passed |
   | +1 | javac | 283 | the patch passed |
   | +1 | checkstyle | 71 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 616 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 98 | hadoop-ozone generated 1 new + 8 unchanged - 0 fixed = 
9 total (was 8) |
   | +1 | findbugs | 533 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 237 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1389 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 59 | The patch does not generate ASF License warnings. |
   | | | 6519 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-867/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/867 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 4809e1ca7ff6 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 294695d |
   | Default Java | 1.8.0_212 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-867/5/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-867/5/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-867/5/testReport/ |
   | Max. process+thread count | 5238 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-867/5/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on issue #867: HDDS-1605. Implement AuditLogging for OM HA Bucket write requests.

2019-06-05 Thread GitBox
bharatviswa504 commented on issue #867: HDDS-1605. Implement AuditLogging for 
OM HA Bucket write requests.
URL: https://github.com/apache/hadoop/pull/867#issuecomment-499305685
 
 
   /retest


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #912: HDDS-1201. Reporting Corruptions in Containers to SCM

2019-06-05 Thread GitBox
hadoop-yetus commented on issue #912: HDDS-1201. Reporting Corruptions in 
Containers to SCM
URL: https://github.com/apache/hadoop/pull/912#issuecomment-499303878
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 95 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 622 | trunk passed |
   | +1 | compile | 340 | trunk passed |
   | +1 | checkstyle | 84 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 977 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 185 | trunk passed |
   | 0 | spotbugs | 391 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 618 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 545 | the patch passed |
   | +1 | compile | 332 | the patch passed |
   | +1 | javac | 332 | the patch passed |
   | -0 | checkstyle | 48 | hadoop-hdds: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 790 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 195 | the patch passed |
   | +1 | findbugs | 629 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 367 | hadoop-hdds in the patch failed. |
   | -1 | unit | 2607 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 94 | The patch does not generate ASF License warnings. |
   | | | 8769 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.block.TestBlockManager |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | hadoop.ozone.om.TestOzoneManager |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.container.common.impl.TestContainerPersistence |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.TestMiniOzoneCluster |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerByPipeline
 |
   |   | hadoop.ozone.client.rpc.TestReadRetries |
   |   | hadoop.ozone.ozShell.TestOzoneShell |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-912/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/912 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 7d60aa1bbb08 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3b1c257 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-912/2/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-912/2/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-912/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-912/2/testReport/ |
   | Max. process+thread count | 3198 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service U: 
hadoop-hdds/container-service |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-912/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #916: HDDS-1652. HddsDispatcher should not shutdown volumeSet. Contributed …

2019-06-05 Thread GitBox
hadoop-yetus commented on issue #916: HDDS-1652. HddsDispatcher should not 
shutdown volumeSet. Contributed …
URL: https://github.com/apache/hadoop/pull/916#issuecomment-499299860
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 36 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for branch |
   | +1 | mvninstall | 490 | trunk passed |
   | +1 | compile | 294 | trunk passed |
   | +1 | checkstyle | 90 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 879 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 175 | trunk passed |
   | 0 | spotbugs | 337 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 526 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 29 | Maven dependency ordering for patch |
   | +1 | mvninstall | 465 | the patch passed |
   | +1 | compile | 281 | the patch passed |
   | +1 | javac | 281 | the patch passed |
   | +1 | checkstyle | 83 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 685 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 175 | the patch passed |
   | +1 | findbugs | 536 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 229 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1612 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 59 | The patch does not generate ASF License warnings. |
   | | | 6864 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-916/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/916 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 2a664afc4c9d 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3b1c257 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-916/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-916/1/testReport/ |
   | Max. process+thread count | 5404 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service hadoop-ozone/tools U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-916/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #867: HDDS-1605. Implement AuditLogging for OM HA Bucket write requests.

2019-06-05 Thread GitBox
bharatviswa504 commented on a change in pull request #867: HDDS-1605. Implement 
AuditLogging for OM HA Bucket write requests.
URL: https://github.com/apache/hadoop/pull/867#discussion_r290978411
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketCreateRequest.java
 ##
 @@ -152,27 +160,35 @@ public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
             OMException.ResultCodes.BUCKET_ALREADY_EXISTS);
       }
 
-      LOG.debug("created bucket: {} in volume: {}", bucketName, volumeName);
-      omMetrics.incNumBuckets();
-
       // Update table cache.
       metadataManager.getBucketTable().addCacheEntry(new CacheKey<>(bucketKey),
           new CacheValue<>(Optional.of(omBucketInfo), transactionLogIndex));
 
-      // return response.
+
+    } catch (IOException ex) {
+      exception = ex;
+    } finally {
+      metadataManager.getLock().releaseBucketLock(volumeName, bucketName);
+      metadataManager.getLock().releaseVolumeLock(volumeName);
+
+      // Performing audit logging outside of the lock.
+      auditLog(auditLogger, buildAuditMessage(OMAction.CREATE_BUCKET,
+          omBucketInfo.toAuditMap(), exception, userInfo));
+    }
 
 Review comment:
   Done. Addressed it.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #867: HDDS-1605. Implement AuditLogging for OM HA Bucket write requests.

2019-06-05 Thread GitBox
bharatviswa504 commented on a change in pull request #867: HDDS-1605. Implement 
AuditLogging for OM HA Bucket write requests.
URL: https://github.com/apache/hadoop/pull/867#discussion_r290978023
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketCreateRequest.java
 ##
 @@ -152,27 +160,35 @@ public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
             OMException.ResultCodes.BUCKET_ALREADY_EXISTS);
       }
 
-      LOG.debug("created bucket: {} in volume: {}", bucketName, volumeName);
-      omMetrics.incNumBuckets();
-
       // Update table cache.
       metadataManager.getBucketTable().addCacheEntry(new CacheKey<>(bucketKey),
           new CacheValue<>(Optional.of(omBucketInfo), transactionLogIndex));
 
-      // return response.
+
+    } catch (IOException ex) {
+      exception = ex;
+    } finally {
+      metadataManager.getLock().releaseBucketLock(volumeName, bucketName);
+      metadataManager.getLock().releaseVolumeLock(volumeName);
+
+      // Performing audit logging outside of the lock.
+      auditLog(auditLogger, buildAuditMessage(OMAction.CREATE_BUCKET,
+          omBucketInfo.toAuditMap(), exception, userInfo));
+    }
 
 Review comment:
   Yes, it can be done. I will update it if you prefer it that way.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hanishakoneru commented on a change in pull request #867: HDDS-1605. Implement AuditLogging for OM HA Bucket write requests.

2019-06-05 Thread GitBox
hanishakoneru commented on a change in pull request #867: HDDS-1605. Implement 
AuditLogging for OM HA Bucket write requests.
URL: https://github.com/apache/hadoop/pull/867#discussion_r290977673
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketCreateRequest.java
 ##
 @@ -152,27 +160,35 @@ public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
             OMException.ResultCodes.BUCKET_ALREADY_EXISTS);
       }
 
-      LOG.debug("created bucket: {} in volume: {}", bucketName, volumeName);
-      omMetrics.incNumBuckets();
-
       // Update table cache.
       metadataManager.getBucketTable().addCacheEntry(new CacheKey<>(bucketKey),
           new CacheValue<>(Optional.of(omBucketInfo), transactionLogIndex));
 
-      // return response.
+
+    } catch (IOException ex) {
+      exception = ex;
+    } finally {
+      metadataManager.getLock().releaseBucketLock(volumeName, bucketName);
+      metadataManager.getLock().releaseVolumeLock(volumeName);
+
+      // Performing audit logging outside of the lock.
+      auditLog(auditLogger, buildAuditMessage(OMAction.CREATE_BUCKET,
+          omBucketInfo.toAuditMap(), exception, userInfo));
+    }
 
 Review comment:
   Yes, I meant: why not move it outside the finally block, since generally we 
keep only the must-be-done things (like lock releases) in the finally block.
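
   A minimal sketch of the shape being discussed: keep only the lock releases 
in the finally block and emit the audit log once the locks are already 
released (simplified from the diff above; illustrative only):

```java
// Simplified sketch; reuses names from the diff above for illustration.
IOException exception = null;
try {
  // ... validate the request and update the bucket table cache ...
} catch (IOException ex) {
  exception = ex;
} finally {
  // finally holds only the must-run cleanup: release locks in reverse order.
  metadataManager.getLock().releaseBucketLock(volumeName, bucketName);
  metadataManager.getLock().releaseVolumeLock(volumeName);
}
// Audit logging runs after the locks are released, outside the finally block.
auditLog(auditLogger, buildAuditMessage(OMAction.CREATE_BUCKET,
    omBucketInfo.toAuditMap(), exception, userInfo));
```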


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #915: HDDS-1650. Fix Ozone tests leaking volume checker thread. Contributed…

2019-06-05 Thread GitBox
hadoop-yetus commented on issue #915: HDDS-1650. Fix Ozone tests leaking volume 
checker thread. Contributed…
URL: https://github.com/apache/hadoop/pull/915#issuecomment-499295177
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 156 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 6 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 670 | trunk passed |
   | +1 | compile | 323 | trunk passed |
   | +1 | checkstyle | 90 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 987 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 174 | trunk passed |
   | 0 | spotbugs | 376 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 603 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 472 | the patch passed |
   | +1 | compile | 277 | the patch passed |
   | +1 | javac | 277 | the patch passed |
   | +1 | checkstyle | 84 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 727 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 163 | the patch passed |
   | +1 | findbugs | 522 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 166 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1217 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 50 | The patch does not generate ASF License warnings. |
   | | | 6883 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.common.impl.TestHddsDispatcher 
|
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.5 Server=18.09.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-915/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/915 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 6fb421f8a9b4 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3b1c257 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-915/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-915/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-915/1/testReport/ |
   | Max. process+thread count | 4416 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service U: 
hadoop-hdds/container-service |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-915/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #703: HDDS-1371. Download RocksDB checkpoint from OM Leader to Follower.

2019-06-05 Thread GitBox
hadoop-yetus commented on issue #703: HDDS-1371. Download RocksDB checkpoint 
from OM Leader to Follower.
URL: https://github.com/apache/hadoop/pull/703#issuecomment-499291766
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 47 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for branch |
   | +1 | mvninstall | 540 | trunk passed |
   | +1 | compile | 309 | trunk passed |
   | +1 | checkstyle | 84 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 941 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 167 | trunk passed |
   | 0 | spotbugs | 365 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 567 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 34 | Maven dependency ordering for patch |
   | +1 | mvninstall | 507 | the patch passed |
   | +1 | compile | 331 | the patch passed |
   | +1 | javac | 331 | the patch passed |
   | -0 | checkstyle | 44 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 797 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 199 | the patch passed |
   | +1 | findbugs | 584 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 251 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1293 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 54 | The patch does not generate ASF License warnings. |
   | | | 7000 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.TestSecureOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.TestOzoneConfigurationFields |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-703/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/703 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 78c75eb7ce10 4.4.0-141-generic #167~14.04.1-Ubuntu SMP Mon 
Dec 10 13:20:24 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0b1e288 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-703/8/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-703/8/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-703/8/testReport/ |
   | Max. process+thread count | 4197 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/framework hadoop-ozone/client 
hadoop-ozone/common hadoop-ozone/integration-test hadoop-ozone/ozone-manager 
hadoop-ozone/ozone-recon U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-703/8/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #914: HDDS-1647 : Recon config tag does not show up on Ozone UI.

2019-06-05 Thread GitBox
hadoop-yetus commented on issue #914: HDDS-1647 : Recon config tag does not 
show up on Ozone UI.
URL: https://github.com/apache/hadoop/pull/914#issuecomment-499289367
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 44 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 528 | trunk passed |
   | +1 | compile | 297 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1674 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 167 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 470 | the patch passed |
   | +1 | compile | 290 | the patch passed |
   | +1 | javac | 290 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 747 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 169 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 264 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1987 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 48 | The patch does not generate ASF License warnings. |
   | | | 6067 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-914/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/914 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml |
   | uname | Linux 8f64a2330199 4.4.0-144-generic #170~14.04.1-Ubuntu SMP Mon 
Mar 18 15:02:05 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0b1e288 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-914/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-914/1/testReport/ |
   | Max. process+thread count | 5165 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common U: hadoop-hdds/common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-914/1/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16314) Make sure all end point URL is covered by the same AuthenticationFilter

2019-06-05 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16857143#comment-16857143
 ] 

Hudson commented on HADOOP-16314:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16683 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16683/])
HADOOP-16314.  Make sure all web end points are covered by the same (eyang: rev 
294695dd57cb75f2756a31a54264bdd37b32bb01)
* (add) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpServerWithSpnego.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsWithAuthenticationFilter.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/log/TestLogLevel.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebAppUtil.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/util/timeline/TimelineServerUtils.java
* (edit) 
hadoop-common-project/hadoop-common/src/site/markdown/HttpAuthentication.md
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/TestSecureNNWithQJM.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSInotifyEventInputStreamKerberized.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestServletFilter.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/test/java/org/apache/hadoop/yarn/server/webproxy/amfilter/TestSecureAmFilter.java
* (add) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/WebServlet.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestGlobalFilter.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestPathFilter.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/Dispatcher.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/reader/TimelineReaderServer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsTokens.java


> Make sure all end point URL is covered by the same AuthenticationFilter
> ---
>
> Key: HADOOP-16314
> URL: https://issues.apache.org/jira/browse/HADOOP-16314
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Reporter: Eric Yang
>Assignee: Prabhu Joseph
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16314-001.patch, HADOOP-16314-002.patch, 
> HADOOP-16314-003.patch, HADOOP-16314-004.patch, HADOOP-16314-005.patch, 
> HADOOP-16314-006.patch, HADOOP-16314-007.patch, Hadoop Web Security.xlsx, 
> scan.txt
>
>
> The enclosed spreadsheet shows the list of web applications deployed by 
> Hadoop and the filters applied to each entry point.
> Hadoop web protocol impersonation has been inconsistent.  Most entry points 
> do not support the ?doAs parameter.  This creates a problem for a secure 
> gateway like Knox when proxying the Hadoop web interface on behalf of the 
> end user.  When the receiving end does not check for the ?doAs flag, the web 
> interface would be accessed using the proxy user's credentials.  This can 
> lead to all kinds of security holes, such as using path traversal to exploit 
> Hadoop. 
> In HADOOP-16287, ProxyUserAuthenticationFilter is proposed as a solution to 
> the web impersonation problem.  This task tracks the changes required in the 
> Hadoop code base to apply the authentication filter globally for each web 
> service port.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #867: HDDS-1605. Implement AuditLogging for OM HA Bucket write requests.

2019-06-05 Thread GitBox
bharatviswa504 commented on a change in pull request #867: HDDS-1605. Implement 
AuditLogging for OM HA Bucket write requests.
URL: https://github.com/apache/hadoop/pull/867#discussion_r290969230
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketCreateRequest.java
 ##
 @@ -152,27 +160,35 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
 OMException.ResultCodes.BUCKET_ALREADY_EXISTS);
   }
 
-  LOG.debug("created bucket: {} in volume: {}", bucketName, volumeName);
-  omMetrics.incNumBuckets();
-
   // Update table cache.
   metadataManager.getBucketTable().addCacheEntry(new CacheKey<>(bucketKey),
   new CacheValue<>(Optional.of(omBucketInfo), transactionLogIndex));
 
-  // return response.
+
+} catch (IOException ex) {
+  exception = ex;
+} finally {
+  metadataManager.getLock().releaseBucketLock(volumeName, bucketName);
+  metadataManager.getLock().releaseVolumeLock(volumeName);
+
+  // Performing audit logging outside of the lock.
+  auditLog(auditLogger, buildAuditMessage(OMAction.CREATE_BUCKET,
+  omBucketInfo.toAuditMap(), exception, userInfo));
+}
 
 Review comment:
   Yes, the point is not to do logging or other expensive things inside the 
lock. After adding the response to the cache we can release the lock, so that 
other threads waiting for the lock can acquire it. (That way we avoid future 
performance problems caused by the audit log; we are already seeing some 
performance issues.) 
   
   On a side note, doing audit logging outside the lock will not cause any 
side effects in my view. Let me know if there is anything I am missing here.
   
   Edit:
   Now I get it: we could also do this outside the finally block. But I think 
it should be fine either way.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hanishakoneru commented on a change in pull request #867: HDDS-1605. Implement AuditLogging for OM HA Bucket write requests.

2019-06-05 Thread GitBox
hanishakoneru commented on a change in pull request #867: HDDS-1605. Implement 
AuditLogging for OM HA Bucket write requests.
URL: https://github.com/apache/hadoop/pull/867#discussion_r290968001
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketCreateRequest.java
 ##
 @@ -152,27 +160,35 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
 OMException.ResultCodes.BUCKET_ALREADY_EXISTS);
   }
 
-  LOG.debug("created bucket: {} in volume: {}", bucketName, volumeName);
-  omMetrics.incNumBuckets();
-
   // Update table cache.
   metadataManager.getBucketTable().addCacheEntry(new CacheKey<>(bucketKey),
   new CacheValue<>(Optional.of(omBucketInfo), transactionLogIndex));
 
-  // return response.
+
+} catch (IOException ex) {
+  exception = ex;
+} finally {
+  metadataManager.getLock().releaseBucketLock(volumeName, bucketName);
+  metadataManager.getLock().releaseVolumeLock(volumeName);
+
+  // Performing audit logging outside of the lock.
+  auditLog(auditLogger, buildAuditMessage(OMAction.CREATE_BUCKET,
+  omBucketInfo.toAuditMap(), exception, userInfo));
+}
 
 Review comment:
   Any reason for having the auditLog in the finally block?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16095) Support impersonation for AuthenticationFilter

2019-06-05 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang resolved HADOOP-16095.

   Resolution: Fixed
Fix Version/s: 3.3.0

The current implementation is based on option 1.  All sub-tasks have been 
closed.  Marking this issue as resolved.
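
For archive readers, a hypothetical sketch of the option-1 flow, built around 
the UserGroupInformation.doAs() wrapper mentioned in the description below 
(names and flow are illustrative, not the committed code):

{code:java}
// Assumes org.apache.hadoop.security.UserGroupInformation and
// java.security.PrivilegedExceptionAction are imported. After the
// authentication filter has validated ?doAs=foobar against the proxy
// user ACLs, the request reports the doAs user as REMOTE_USER.
String doAsUser = httpRequest.getRemoteUser();
UserGroupInformation proxyUgi = UserGroupInformation.createProxyUser(
    doAsUser, UserGroupInformation.getLoginUser());
proxyUgi.doAs((PrivilegedExceptionAction<Void>) () -> {
  // Call Hadoop APIs here as the doAs user.
  return null;
});
{code}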

> Support impersonation for AuthenticationFilter
> --
>
> Key: HADOOP-16095
> URL: https://issues.apache.org/jira/browse/HADOOP-16095
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16095.004.patch
>
>
> External services or YARN services may need to call into the WebHDFS or YARN 
> REST API on behalf of the user using web protocols. It would be good to 
> support an impersonation mechanism in AuthenticationFilter or similar 
> extensions. The general design is similar to UserGroupInformation.doAs in 
> the RPC layer.
> The calling service's credential is verified as a proxy user coming from a 
> trusted host, by checking the Hadoop proxy user ACL on the server side. If 
> the proxy user ACL allows the proxy user to become the doAs user, the 
> HttpRequest object will report REMOTE_USER as the doAs user. This feature 
> enables web application logic to be written with minimal changes to call the 
> Hadoop API with the UserGroupInformation.doAs() wrapper.
> h2. HTTP Request
> A few possible options:
> 1. Using query parameter to pass doAs user:
> {code:java}
> POST /service?doAs=foobar
> Authorization: [proxy user Kerberos token]
> {code}
> 2. Use HTTP Header to pass doAs user:
> {code:java}
> POST /service
> Authorization: [proxy user Kerberos token]
> x-hadoop-doas: foobar
> {code}
> h2. HTTP Response
> 403 - Forbidden (Including impersonation is not allowed)
> h2. Proxy User ACL requirement
> The proxy user's kerberos token maps to a service principal, such as 
> yarn/host1.example.com. The host part of the credential and the HTTP request 
> origin are both validated against the *hadoop.proxyuser.yarn.hosts* ACL. The 
> doAs user's group membership or identity is checked with either 
> *hadoop.proxyuser.yarn.groups* or *hadoop.proxyuser.yarn.users*. This 
> ensures the caller is coming from an authorized host and belongs to an 
> authorized group.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16314) Make sure all end point URL is covered by the same AuthenticationFilter

2019-06-05 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HADOOP-16314:
---
   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Thank you [~Prabhu Joseph] for the patch.
I just committed this to trunk.

> Make sure all end point URL is covered by the same AuthenticationFilter
> ---
>
> Key: HADOOP-16314
> URL: https://issues.apache.org/jira/browse/HADOOP-16314
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Reporter: Eric Yang
>Assignee: Prabhu Joseph
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16314-001.patch, HADOOP-16314-002.patch, 
> HADOOP-16314-003.patch, HADOOP-16314-004.patch, HADOOP-16314-005.patch, 
> HADOOP-16314-006.patch, HADOOP-16314-007.patch, Hadoop Web Security.xlsx, 
> scan.txt
>
>
> The enclosed spreadsheet shows the list of web applications deployed by 
> Hadoop and the filters applied to each entry point.
> Hadoop web protocol impersonation has been inconsistent.  Most entry points 
> do not support the ?doAs parameter.  This creates a problem for a secure 
> gateway like Knox when proxying the Hadoop web interface on behalf of the 
> end user.  When the receiving end does not check for the ?doAs flag, the web 
> interface would be accessed using the proxy user's credentials.  This can 
> lead to all kinds of security holes, such as using path traversal to exploit 
> Hadoop. 
> In HADOOP-16287, ProxyUserAuthenticationFilter is proposed as a solution to 
> the web impersonation problem.  This task tracks the changes required in the 
> Hadoop code base to apply the authentication filter globally for each web 
> service port.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #703: HDDS-1371. Download RocksDB checkpoint from OM Leader to Follower.

2019-06-05 Thread GitBox
hadoop-yetus commented on issue #703: HDDS-1371. Download RocksDB checkpoint 
from OM Leader to Follower.
URL: https://github.com/apache/hadoop/pull/703#issuecomment-499284692
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 43 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for branch |
   | +1 | mvninstall | 500 | trunk passed |
   | +1 | compile | 286 | trunk passed |
   | +1 | checkstyle | 81 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 970 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 174 | trunk passed |
   | 0 | spotbugs | 373 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 587 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 30 | Maven dependency ordering for patch |
   | +1 | mvninstall | 492 | the patch passed |
   | +1 | compile | 290 | the patch passed |
   | +1 | javac | 290 | the patch passed |
   | -0 | checkstyle | 42 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 724 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 175 | the patch passed |
   | +1 | findbugs | 616 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 269 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1315 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 49 | The patch does not generate ASF License warnings. |
   | | | 6876 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.TestSecureOzoneCluster |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.TestOzoneConfigurationFields |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-703/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/703 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 5add6ca39b8a 4.4.0-141-generic #167~14.04.1-Ubuntu SMP Mon 
Dec 10 13:20:24 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0b1e288 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-703/7/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-703/7/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-703/7/testReport/ |
   | Max. process+thread count | 4608 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/framework hadoop-ozone/client 
hadoop-ozone/common hadoop-ozone/integration-test hadoop-ozone/ozone-manager 
hadoop-ozone/ozone-recon U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-703/7/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] eyanghwx commented on issue #840: HDDS-1565. Rename k8s-dev and k8s-dev-push profiles to docker and docker-push

2019-06-05 Thread GitBox
eyanghwx commented on issue #840: HDDS-1565. Rename k8s-dev and k8s-dev-push 
profiles to docker and docker-push
URL: https://github.com/apache/hadoop/pull/840#issuecomment-499282091
 
 
   @bharatviswa504 yes, docker is shorter to type.  I reopened HDDS-1565 though.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 edited a comment on issue #840: HDDS-1565. Rename k8s-dev and k8s-dev-push profiles to docker and docker-push

2019-06-05 Thread GitBox
bharatviswa504 edited a comment on issue #840: HDDS-1565. Rename k8s-dev and 
k8s-dev-push profiles to docker and docker-push
URL: https://github.com/apache/hadoop/pull/840#issuecomment-499280615
 
 
   Hi @eyanghwx 
   The jira title has this description:
   Rename k8s-dev and k8s-dev-push profiles to docker-build and docker-push
   
   As the commit message says docker, it might be confusing. But the jira 
title and description say to rename it to docker-build.
   
   Let me know if you want this changed to docker instead of docker-build; I 
can open a new jira for that.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] eyanghwx commented on issue #840: HDDS-1565. Rename k8s-dev and k8s-dev-push profiles to docker and docker-push

2019-06-05 Thread GitBox
eyanghwx commented on issue #840: HDDS-1565. Rename k8s-dev and k8s-dev-push 
profiles to docker and docker-push
URL: https://github.com/apache/hadoop/pull/840#issuecomment-499279608
 
 
   The implementation is different from the description.  It would be easier 
to call it docker instead of docker-build, to make the command fewer 
characters to type.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on issue #804: HDDS-1496. Support partial chunk reads and checksum verification

2019-06-05 Thread GitBox
bharatviswa504 commented on issue #804: HDDS-1496. Support partial chunk reads 
and checksum verification
URL: https://github.com/apache/hadoop/pull/804#issuecomment-499279312
 
 
   Hi @hanishakoneru 
   Thanks for the update. I see that some of the old comments have not been 
addressed.
   I have a few minor nits.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16117) Upgrade to latest AWS SDK

2019-06-05 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16857114#comment-16857114
 ] 

Steve Loughran commented on HADOOP-16117:
-

The PR is now at SDK version 1.11.563; tests are working well.

I've been using this with the HADOOP-15663 PR to create on-demand DDB tables 
(#879) and I have not encountered problems. IMO, this update is ready to go 
into trunk for broader testing.

+[~fabbri] [~mackrorysd]

> Upgrade to latest AWS SDK
> -
>
> Key: HADOOP-16117
> URL: https://issues.apache.org/jira/browse/HADOOP-16117
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Upgrade to the most recent AWS SDK. That's 1.11; even though there's a 2.0 
> out, that would be a more significant upgrade, with downstream impact.
> The new [AWS SDK update 
> process|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md#-qualifying-an-aws-sdk-update]
>  *must* be followed, and we should plan for 1-2 surprises afterwards anyway.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #818: HADOOP-16117 Update AWS SDK to 1.11.563

2019-06-05 Thread GitBox
steveloughran commented on issue #818: HADOOP-16117 Update AWS SDK to 1.11.563
URL: https://github.com/apache/hadoop/pull/818#issuecomment-499277151
 
 
   Latest patch: tested against S3 Ireland, DDB + scale. Followed the SDK 
update protocol in the docs and added some more commands for the runbook: 
expunge and `hdfs fetchdt`. Couldn't use `hadoop dtutil` on the CLI, as I 
couldn't use a -D option to set the delegation tokens in that command.
   
   I've been using this with the HADOOP-15663 PR to create on-demand DDB tables 
(#879) and I have not encountered problems. IMO, this update is ready to go 
with trunk for broader testing
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16328) ClassCastException in S3GuardTool.checkMetadataStoreUri

2019-06-05 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16328.
-
Resolution: Cannot Reproduce

Made this go away when I brought my branch into sync with trunk: there's 
enough argument checking there that this problem doesn't surface. Closing.
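
For reference, a minimal sketch of the guard suggested in the description 
below; fsURI and the message text are illustrative, not the actual S3GuardTool 
code:

{code:java}
// Assumes org.apache.hadoop.fs.FileSystem and
// org.apache.hadoop.fs.s3a.S3AFileSystem are imported.
FileSystem fs = FileSystem.newInstance(fsURI, conf);
if (!(fs instanceof S3AFileSystem)) {
  // Include the FS URI so the mismatch is diagnosable from the message.
  throw new IllegalArgumentException("Expected an S3A filesystem at "
      + fs.getUri() + " but found " + fs.getClass().getName());
}
{code}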

> ClassCastException in S3GuardTool.checkMetadataStoreUri
> ---
>
> Key: HADOOP-16328
> URL: https://issues.apache.org/jira/browse/HADOOP-16328
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> ClassCastException in S3GuardTool.checkMetadataStoreUri() during a run of 
> {{ITestS3GuardToolDynamoDB.testDestroyFailsIfNoBucketNameOrDDBTableSet()}}.
> The stack trace is lost, but the cause is that the FS returned by 
> newInstance() isn't an S3AFS; it's a local FS. 
> Irrespective of underlying cause, the s3guard tool code should check the 
> class type and raise an exception including the FS URI on a mismatch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran closed pull request #849: HADOOP-16328: ClassCastException in S3GuardTool.checkMetadataStoreUri

2019-06-05 Thread GitBox
steveloughran closed pull request #849: HADOOP-16328: ClassCastException in 
S3GuardTool.checkMetadataStoreUri
URL: https://github.com/apache/hadoop/pull/849
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #849: HADOOP-16328: ClassCastException in S3GuardTool.checkMetadataStoreUri

2019-06-05 Thread GitBox
steveloughran commented on issue #849: HADOOP-16328: ClassCastException in 
S3GuardTool.checkMetadataStoreUri
URL: https://github.com/apache/hadoop/pull/849#issuecomment-499275156
 
 
   Made this go away when I brought my branch into sync with trunk: there's 
enough argument checking there that this problem doesn't surface. Closing.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #804: HDDS-1496. Support partial chunk reads and checksum verification

2019-06-05 Thread GitBox
bharatviswa504 commented on a change in pull request #804: HDDS-1496. Support 
partial chunk reads and checksum verification
URL: https://github.com/apache/hadoop/pull/804#discussion_r290956283
 
 

 ##
 File path: 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/ChunkInputStream.java
 ##
 @@ -346,28 +349,34 @@ protected ByteString readChunk(ChunkInfo readChunkInfo) 
throws IOException {
 
   private CheckedBiFunction<ContainerCommandRequestProto,
       ContainerCommandResponseProto, IOException> validator =
-  (request, response) -> {
-final ChunkInfo chunkInfo = request.getReadChunk().getChunkData();
-
-ReadChunkResponseProto readChunkResponse = response.getReadChunk();
-ByteString byteString = readChunkResponse.getData();
-
-if (byteString.size() != chunkInfo.getLen()) {
-  // Bytes read from chunk should be equal to chunk size.
-  throw new OzoneChecksumException(String
-  .format("Inconsistent read for chunk=%s len=%d bytesRead=%d",
-  chunkInfo.getChunkName(), chunkInfo.getLen(),
-  byteString.size()));
-}
-
-if (verifyChecksum) {
-  ChecksumData checksumData =
-  ChecksumData.getFromProtoBuf(chunkInfo.getChecksumData());
-  int checkumStartIndex =
-  (int) (bufferOffset / checksumData.getBytesPerChecksum());
-  Checksum.verifyChecksum(byteString, checksumData, checkumStartIndex);
-}
-  };
+  (request, response) -> {
+final ChunkInfo reqChunkInfo =
+request.getReadChunk().getChunkData();
+
+ReadChunkResponseProto readChunkResponse = response.getReadChunk();
+ByteString byteString = readChunkResponse.getData();
+
+if (byteString.size() != reqChunkInfo.getLen()) {
+  // Bytes read from chunk should be equal to chunk size.
+  throw new OzoneChecksumException(String
+  .format("Inconsistent read for chunk=%s len=%d bytesRead=%d",
+  reqChunkInfo.getChunkName(), reqChunkInfo.getLen(),
+  byteString.size()));
+}
+
+if (verifyChecksum) {
+  ChecksumData checksumData = ChecksumData.getFromProtoBuf(
+  chunkInfo.getChecksumData());
+
+  // ChecksumData stores checksum for each 'numBytesPerChceksum'
 
 Review comment:
   Typo. numBytesPerChceksum -> numBytesPerChecksum 
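   
   As a side note for archive readers, the index arithmetic in the quoted 
validator can be illustrated with small hypothetical numbers (names adapted 
from the diff, values invented):
   
   ```java
   int bytesPerChecksum = 1024 * 1024;     // assume 1 MB per checksum entry
   long bufferOffset = 3L * 1024 * 1024;   // read starts 3 MB into the chunk
   // Verification starts at the checksum entry that covers bufferOffset:
   int checksumStartIndex = (int) (bufferOffset / bytesPerChecksum);  // == 3
   ```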


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao opened a new pull request #916: HDDS-1652. HddsDispatcher should not shutdown volumeSet. Contributed …

2019-06-05 Thread GitBox
xiaoyuyao opened a new pull request #916: HDDS-1652. HddsDispatcher should not 
shutdown volumeSet. Contributed …
URL: https://github.com/apache/hadoop/pull/916
 
 
   …by Xiaoyu Yao.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #877: HDDS-1618. Merge code for HA and Non-HA OM write type requests for bucket

2019-06-05 Thread GitBox
hadoop-yetus commented on issue #877: HDDS-1618. Merge code for HA and Non-HA 
OM write type requests for bucket
URL: https://github.com/apache/hadoop/pull/877#issuecomment-499272954
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 70 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 64 | Maven dependency ordering for branch |
   | +1 | mvninstall | 834 | trunk passed |
   | +1 | compile | 466 | trunk passed |
   | +1 | checkstyle | 124 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1064 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 177 | trunk passed |
   | 0 | spotbugs | 340 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 534 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for patch |
   | +1 | mvninstall | 500 | the patch passed |
   | +1 | compile | 330 | the patch passed |
   | +1 | javac | 330 | the patch passed |
   | -0 | checkstyle | 46 | hadoop-ozone: The patch generated 5 new + 0 
unchanged - 0 fixed = 5 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 703 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 179 | the patch passed |
   | +1 | findbugs | 585 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 299 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1386 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 63 | The patch does not generate ASF License warnings. |
   | | | 7685 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.block.TestBlockManager |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.TestMiniChaosOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-877/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/877 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 837c14e0ac56 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0b1e288 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-877/6/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-877/6/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-877/6/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-877/6/testReport/ |
   | Max. process+thread count | 4857 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager hadoop-ozone/integration-test U: 
hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-877/6/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #804: HDDS-1496. Support partial chunk reads and checksum verification

2019-06-05 Thread GitBox
bharatviswa504 commented on a change in pull request #804: HDDS-1496. Support 
partial chunk reads and checksum verification
URL: https://github.com/apache/hadoop/pull/804#discussion_r290949158
 
 

 ##
 File path: 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockInputStream.java
 ##
 @@ -115,8 +116,13 @@ public BlockInputStream(BlockID blockId, long blockLen, 
Pipeline pipeline,
*/
   public synchronized void initialize() throws IOException {
 
-List<ChunkInfo> chunks = getChunkInfos();
+// Pre-check that the stream has not been intialized already
+if (initialized) {
+  return;
+}
+Preconditions.checkArgument(chunkOffsets == null);
 
 Review comment:
   Do we need this Precondition check?
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hanishakoneru commented on issue #703: HDDS-1371. Download RocksDB checkpoint from OM Leader to Follower.

2019-06-05 Thread GitBox
hanishakoneru commented on issue #703: HDDS-1371. Download RocksDB checkpoint 
from OM Leader to Follower.
URL: https://github.com/apache/hadoop/pull/703#issuecomment-499267191
 
 
   /retest


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao opened a new pull request #915: HDDS-1650. Fix Ozone tests leaking volume checker thread. Contributed…

2019-06-05 Thread GitBox
xiaoyuyao opened a new pull request #915: HDDS-1650. Fix Ozone tests leaking 
volume checker thread. Contributed…
URL: https://github.com/apache/hadoop/pull/915
 
 
   … by Xiaoyu Yao.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao merged pull request #885: HDDS-1541. Implement addAcl, removeAcl, setAcl, getAcl for Key. Contributed by Ajay Kumat.

2019-06-05 Thread GitBox
xiaoyuyao merged pull request #885: HDDS-1541. Implement 
addAcl,removeAcl,setAcl,getAcl for Key. Contributed by Ajay Kumat.
URL: https://github.com/apache/hadoop/pull/885
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] avijayanhwx opened a new pull request #914: HDDS-1647 : Recon config tag does not show up on Ozone UI.

2019-06-05 Thread GitBox
avijayanhwx opened a new pull request #914: HDDS-1647 : Recon config tag does 
not show up on Ozone UI.
URL: https://github.com/apache/hadoop/pull/914
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] avijayanhwx closed pull request #913: HDDS-1647 : Recon config tag does not show up on Ozone UI

2019-06-05 Thread GitBox
avijayanhwx closed pull request #913: HDDS-1647 : Recon config tag does not 
show up on Ozone UI
URL: https://github.com/apache/hadoop/pull/913
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] avijayanhwx opened a new pull request #913: HDDS-1647 : Recon config tag does not show up on Ozone UI

2019-06-05 Thread GitBox
avijayanhwx opened a new pull request #913: HDDS-1647 : Recon config tag does 
not show up on Ozone UI
URL: https://github.com/apache/hadoop/pull/913
 
 
   The Recon tag does not show up in the list of tags on the /conf page.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #703: HDDS-1371. Download RocksDB checkpoint from OM Leader to Follower.

2019-06-05 Thread GitBox
bharatviswa504 commented on a change in pull request #703: HDDS-1371. Download 
RocksDB checkpoint from OM Leader to Follower.
URL: https://github.com/apache/hadoop/pull/703#discussion_r290941336
 
 

 ##
 File path: hadoop-hdds/common/src/main/resources/ozone-default.xml
 ##
 @@ -1700,6 +1700,45 @@
   .
   
 
+  <property>
+    <name>ozone.om.ratis.snapshot.dir</name>
+    <value/>
+    <tag>OZONE, OM, STORAGE, MANAGEMENT, RATIS</tag>
+    <description>This directory is used for storing OM's snapshot
+      related files like the ratisSnapshotIndex and DB checkpoint from leader
+      OM.
+      If undefined, OM snapshot dir will fallback to ozone.om.ratis.storage.dir.
+    </description>
+  </property>
 
 Review comment:
   Yes, you are right. In my earlier comment the last key I meant was 
ozone.metadata.dir, but I wrote it incorrectly.
   This looks good to me, since the fallback to ozone.om.ratis.storage.dir is 
already mentioned in the property description.





[GitHub] [hadoop] hanishakoneru commented on a change in pull request #703: HDDS-1371. Download RocksDB checkpoint from OM Leader to Follower.

2019-06-05 Thread GitBox
hanishakoneru commented on a change in pull request #703: HDDS-1371. Download 
RocksDB checkpoint from OM Leader to Follower.
URL: https://github.com/apache/hadoop/pull/703#discussion_r290939438
 
 

 ##
 File path: hadoop-hdds/common/src/main/resources/ozone-default.xml
 ##
 @@ -1700,6 +1700,45 @@
   .
   
 
+  <property>
+    <name>ozone.om.ratis.snapshot.dir</name>
+    <value/>
+    <tag>OZONE, OM, STORAGE, MANAGEMENT, RATIS</tag>
+    <description>This directory is used for storing OM's snapshot
+      related files like the ratisSnapshotIndex and DB checkpoint from leader
+      OM.
+      If undefined, OM snapshot dir will fallback to ozone.om.ratis.storage.dir.
+    </description>
+  </property>
 
 Review comment:
   The fallback is in this order:
   ozone.om.ratis.snapshot.dir -> ozone.om.ratis.storage.dir -> 
ozone.metadata.dir
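   To make that chain concrete, here is a minimal sketch of how such a lookup 
can be resolved with the stock Hadoop Configuration API; the class name, method 
name, and exact key strings are illustrative assumptions, not the committed 
HDDS-1371 code:

   import org.apache.hadoop.conf.Configuration;

   public final class OmSnapshotDirResolver {

     private OmSnapshotDirResolver() {
     }

     /** Resolve the OM snapshot dir via the fallback chain above. */
     public static String resolve(Configuration conf) {
       String dir = conf.getTrimmed("ozone.om.ratis.snapshot.dir");
       if (dir == null || dir.isEmpty()) {
         // First fallback: the Ratis storage dir.
         dir = conf.getTrimmed("ozone.om.ratis.storage.dir");
       }
       if (dir == null || dir.isEmpty()) {
         // Last fallback: the generic Ozone metadata dir(s).
         dir = conf.getTrimmed("ozone.metadata.dirs");
       }
       return dir;
     }
   }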





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #703: HDDS-1371. Download RocksDB checkpoint from OM Leader to Follower.

2019-06-05 Thread GitBox
bharatviswa504 commented on a change in pull request #703: HDDS-1371. Download 
RocksDB checkpoint from OM Leader to Follower.
URL: https://github.com/apache/hadoop/pull/703#discussion_r290936425
 
 

 ##
 File path: hadoop-hdds/common/src/main/resources/ozone-default.xml
 ##
 @@ -1700,6 +1700,45 @@
   .
   
 
+  <property>
+    <name>ozone.om.ratis.snapshot.dir</name>
+    <value/>
+    <tag>OZONE, OM, STORAGE, MANAGEMENT, RATIS</tag>
+    <description>This directory is used for storing OM's snapshot
+      related files like the ratisSnapshotIndex and DB checkpoint from leader
+      OM.
+      If undefined, OM snapshot dir will fallback to ozone.om.ratis.storage.dir.
+    </description>
+  </property>
 
 Review comment:
   Minor: in the code, if ozone.om.ratis.snapshot.dir is not defined, the next 
fallback is ozone.om.ratis.storage.dir; if that too is not set, it falls back 
to ozone.metadata.dir.





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #703: HDDS-1371. Download RocksDB checkpoint from OM Leader to Follower.

2019-06-05 Thread GitBox
bharatviswa504 commented on a change in pull request #703: HDDS-1371. Download 
RocksDB checkpoint from OM Leader to Follower.
URL: https://github.com/apache/hadoop/pull/703#discussion_r290931954
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java
 ##
 @@ -474,5 +474,4 @@ public static InetSocketAddress getScmAddressForSecurityProtocol(
     return NetUtils.createSocketAddr(host.get() + ":" + port
         .orElse(ScmConfigKeys.OZONE_SCM_SECURITY_SERVICE_PORT_DEFAULT));
   }
-
 
 Review comment:
   This change is not needed, as it is the only change in this file.





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #703: HDDS-1371. Download RocksDB checkpoint from OM Leader to Follower.

2019-06-05 Thread GitBox
bharatviswa504 commented on a change in pull request #703: HDDS-1371. Download 
RocksDB checkpoint from OM Leader to Follower.
URL: https://github.com/apache/hadoop/pull/703#discussion_r290932027
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
 ##
 @@ -435,7 +435,7 @@
   OZONE_FREON_HTTP_KERBEROS_KEYTAB_FILE_KEY =
   "ozone.freon.http.kerberos.keytab";
 
-  /**
+   /**
 
 Review comment:
   Minor: this change is not needed, as it is the only change in this file.





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #703: HDDS-1371. Download RocksDB checkpoint from OM Leader to Follower.

2019-06-05 Thread GitBox
bharatviswa504 commented on a change in pull request #703: HDDS-1371. Download 
RocksDB checkpoint from OM Leader to Follower.
URL: https://github.com/apache/hadoop/pull/703#discussion_r290931954
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java
 ##
 @@ -474,5 +474,4 @@ public static InetSocketAddress getScmAddressForSecurityProtocol(
     return NetUtils.createSocketAddr(host.get() + ":" + port
         .orElse(ScmConfigKeys.OZONE_SCM_SECURITY_SERVICE_PORT_DEFAULT));
   }
-
 
 Review comment:
   Minor: this change is not needed, as it is the only change in this file.





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #703: HDDS-1371. Download RocksDB checkpoint from OM Leader to Follower.

2019-06-05 Thread GitBox
bharatviswa504 commented on a change in pull request #703: HDDS-1371. Download 
RocksDB checkpoint from OM Leader to Follower.
URL: https://github.com/apache/hadoop/pull/703#discussion_r290932027
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
 ##
 @@ -435,7 +435,7 @@
   OZONE_FREON_HTTP_KERBEROS_KEYTAB_FILE_KEY =
   "ozone.freon.http.kerberos.keytab";
 
-  /**
+   /**
 
 Review comment:
   This change is not needed, as it is the only change in this file.





[GitHub] [hadoop] eyanghwx commented on a change in pull request #905: HDDS-1508. Provide example k8s deployment files for the new CSI server

2019-06-05 Thread GitBox
eyanghwx commented on a change in pull request #905: HDDS-1508. Provide example 
k8s deployment files for the new CSI server
URL: https://github.com/apache/hadoop/pull/905#discussion_r290931278
 
 

 ##
 File path: 
hadoop-ozone/dist/src/main/k8s/examples/ozone-csi/datanode-daemonset.yaml
 ##
 @@ -0,0 +1,56 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+  name: datanode
+  labels:
+    app.kubernetes.io/component: ozone
+spec:
+  selector:
+    matchLabels:
+      app: ozone
+      component: datanode
+  template:
+    metadata:
+      annotations:
+        prometheus.io/scrape: "true"
+        prometheus.io/port: "9882"
+        prometheus.io/path: /prom
+      labels:
+        app: ozone
+        component: datanode
+    spec:
+      containers:
+      - name: datanode
+        image: '@docker.image@'
+        args:
+        - ozone
+        - datanode
+        ports:
+        - containerPort: 9870
+          name: rpc
+        envFrom:
+        - configMapRef:
+            name: config
+        volumeMounts:
+        - name: data
+          mountPath: /data
+      initContainers: []
+      volumes:
+      - name: data
+        emptyDir: {}
 
 Review comment:
   The suggestion of a Maven build directory was in the context of spawning a 
single-node k8s cluster from Maven for integration-test purposes. In a real 
distributed cluster, the mount path is user-configurable, and the admin is 
still responsible for keeping the mount path in a correct state.





[jira] [Comment Edited] (HADOOP-13980) S3Guard CLI: Add fsck check command

2019-06-05 Thread Aaron Fabbri (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-13980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16857029#comment-16857029
 ] 

Aaron Fabbri edited comment on HADOOP-13980 at 6/5/19 8:55 PM:
---

Thanks for your draft of FSCK requirements [~ste...@apache.org]. This is a good 
start.

One thing that comes to mind: I don't know that we want to consider "auth mode" 
as a factor here.  Erring on the side of over-explaining this stuff for clarity:

There are two main authoritative mode flags in play:

(1) per-directory metastore bit that says "this directory is fully loaded into 
the metastore"

(2) the s3a client config bit fs.s3a.metadatastore.authoritative, which allows 
s3a to short-circuit (skip) s3 on some metadata queries. This one is just a 
runtime client-behavior flag: you could have multiple clients with different 
settings sharing a bucket, and FSCK could also have a different config. I think 
you'll still want some FSCK options to select the level of enforcement / 
paranoia as you outline; I just don't think it needs to be conflated with the 
client's allow-auth flag. I'd imagine this as a growing set of invariant checks 
that can be

Whether or not an s3a client has the metadatastore.authoritative bit set in its 
config doesn't really affect the contents of the metadata store or its 
relationship to the underlying storage (s3) state\*.  If the is_authoritative 
bit is set on a directory in the metastore, however, that directory's listing 
from the metadatastore should *match* the listing of that dir from s3. If the 
bit is not set, the metastore listing should be a subset of the s3 listing.

I would also split the consistency checks into two categories: 
MetadataStore-specific and generic. The majority of the checks here are generic 
tests that work with any MetadataStore. DDB also needs to check its internal 
consistency (since it uses the ancestor-exists invariant to avoid table scans).

Also, agreed that you'll need table scans here, but how do we expose this for 
FSCK only? FSCK traditionally reaches below the FS to check its structures 
(e.g. ext3 fsck uses the block device below the ext3 fs to check the on-disk 
format, right?).

\* some nuance here, if we want to discuss further.


was (Author: fabbri):
Thanks for your draft of FSCK requirements [~ste...@apache.org]. This is a good 
start.

One thing that comes to mind: I don't know that we want to consider "auth mode" 
as a factor here.  Erring on the side of over-explaining this stuff for clarity:

There are two main authoritative mode flags in play:

(1) per-directory metastore bit that says "this directory is fully loaded into 
the metastore"

(2) s3a client config bit fs.s3a.metadatastore.authoritative, which allows s3a 
to short-circuit (skip) s3 on some metadata queries. This one is just a runtime 
client behavior flag. You could have multiple clients with different settings 
sharing a bucket. FSCK could also have a different config.  I think you'll 
still want some FSCK options to select the level of enforcement / paranoia as 
you outline, just don't think it needs to be conflated with client's allow auth 
flag. I'd imagine this as a growing set of invariant checks that can be 
categorized into something like basic / paranoid / full.

Whether or not a s3a client has metadatastore.authoritative bit set in its 
config doesn't really affect the contents of the metadata store or its 
relationship to the underlying storage (s3) state**.  If the is_authoritative 
bit is set on a directory in the metastore, however, that directory listing 
from metadatastore should *match* the listing of that dir from s3. If the bit 
is not set, the metastore listing should be a subset of the s3 listing.

I would also split the consistency checks into two categories: 
MetadataStore-specific, and generic. Majority of the stuff here are generic 
tests that work with any MetadataStore. DDB also needs to check its internal 
consistency (since it uses the ancestor-exists invariant to avoid table scans).

Also agreed you'll need table scans here–but how do we expose this for FSCK 
only? FSCK traditionally reaches below the FS to check its structures. (e.g. 
ext3 fsck uses a block device below the ext3 fs to check on disk format, 
right?).

 

** some nuance here, if we want to discuss further.

> S3Guard CLI: Add fsck check command
> ---
>
> Key: HADOOP-13980
> URL: https://issues.apache.org/jira/browse/HADOOP-13980
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Major
>
> As discussed in HADOOP-13650, we want to add an S3Guard CLI command which 
> compares S3 with MetadataStore, and returns a failure status if any 

[GitHub] [hadoop] hanishakoneru commented on a change in pull request #703: HDDS-1371. Download RocksDB checkpoint from OM Leader to Follower.

2019-06-05 Thread GitBox
hanishakoneru commented on a change in pull request #703: HDDS-1371. Download 
RocksDB checkpoint from OM Leader to Follower.
URL: https://github.com/apache/hadoop/pull/703#discussion_r290930844
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -536,14 +535,6 @@ private void setOMNodeDetails(String serviceId, String nodeId,
     configuration.set(OZONE_OM_ADDRESS_KEY,
         NetUtils.getHostPortString(rpcAddress));
 
-    // Find the Ratis storage dir
-    String omRatisDirectory = OmUtils.getOMRatisDirectory(configuration);
-    if (omRatisDirectory == null || omRatisDirectory.isEmpty()) {
-      throw new IllegalArgumentException(HddsConfigKeys.OZONE_METADATA_DIRS +
-          " must be defined.");
-    }
-
-    omRatisStorageDir = OmUtils.createOMDir(omRatisDirectory);
 
 Review comment:
   Thank you @bharatviswa504 for catching this. It was removed by mistake; I 
have added it back.





[jira] [Commented] (HADOOP-13980) S3Guard CLI: Add fsck check command

2019-06-05 Thread Aaron Fabbri (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-13980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16857029#comment-16857029
 ] 

Aaron Fabbri commented on HADOOP-13980:
---

Thanks for your draft of FSCK requirements [~ste...@apache.org]. This is a good 
start.

One thing that comes to mind: I don't know that we want to consider "auth mode" 
as a factor here.  Erring on the side of over-explaining this stuff for clarity:

There are two main authoritative mode flags in play:

(1) per-directory metastore bit that says "this directory is fully loaded into 
the metastore"

(2) the s3a client config bit fs.s3a.metadatastore.authoritative, which allows 
s3a to short-circuit (skip) s3 on some metadata queries. This one is just a 
runtime client-behavior flag: you could have multiple clients with different 
settings sharing a bucket, and FSCK could also have a different config. I think 
you'll still want some FSCK options to select the level of enforcement / 
paranoia as you outline; I just don't think it needs to be conflated with the 
client's allow-auth flag. I'd imagine this as a growing set of invariant checks 
that can be categorized into something like basic / paranoid / full.

Whether or not an s3a client has the metadatastore.authoritative bit set in its 
config doesn't really affect the contents of the metadata store or its 
relationship to the underlying storage (s3) state**.  If the is_authoritative 
bit is set on a directory in the metastore, however, that directory's listing 
from the metadatastore should *match* the listing of that dir from s3. If the 
bit is not set, the metastore listing should be a subset of the s3 listing.

I would also split the consistency checks into two categories: 
MetadataStore-specific and generic. The majority of the checks here are generic 
tests that work with any MetadataStore. DDB also needs to check its internal 
consistency (since it uses the ancestor-exists invariant to avoid table scans).

Also, agreed that you'll need table scans here, but how do we expose this for 
FSCK only? FSCK traditionally reaches below the FS to check its structures 
(e.g. ext3 fsck uses the block device below the ext3 fs to check the on-disk 
format, right?).

 

** some nuance here, if we want to discuss further.
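
To make the listing invariant concrete, here is a small, self-contained sketch 
in plain Java; the class, method, and parameter names are illustrative 
stand-ins, not the S3Guard API:

import java.util.Set;

/** Illustrative check of the directory-listing invariant described above. */
final class FsckListingInvariant {

  private FsckListingInvariant() {
  }

  /**
   * @param metastoreListing child paths recorded in the MetadataStore
   * @param s3Listing child paths returned by a raw S3 listing
   * @param isAuthoritative the per-directory is_authoritative bit
   * @return true when the invariant holds
   */
  static boolean holds(Set<String> metastoreListing, Set<String> s3Listing,
      boolean isAuthoritative) {
    if (isAuthoritative) {
      // Fully loaded directory: the two listings must match exactly.
      return metastoreListing.equals(s3Listing);
    }
    // Otherwise the metastore may lag: it must be a subset of the S3 listing.
    return s3Listing.containsAll(metastoreListing);
  }
}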

> S3Guard CLI: Add fsck check command
> ---
>
> Key: HADOOP-13980
> URL: https://issues.apache.org/jira/browse/HADOOP-13980
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Major
>
> As discussed in HADOOP-13650, we want to add an S3Guard CLI command which 
> compares S3 with MetadataStore, and returns a failure status if any 
> invariants are violated.






[GitHub] [hadoop] hadoop-yetus commented on issue #867: HDDS-1605. Implement AuditLogging for OM HA Bucket write requests.

2019-06-05 Thread GitBox
hadoop-yetus commented on issue #867: HDDS-1605. Implement AuditLogging for OM 
HA Bucket write requests.
URL: https://github.com/apache/hadoop/pull/867#issuecomment-499251087
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 32 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for branch |
   | +1 | mvninstall | 480 | trunk passed |
   | +1 | compile | 266 | trunk passed |
   | +1 | checkstyle | 67 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 785 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 156 | trunk passed |
   | 0 | spotbugs | 322 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 510 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 30 | Maven dependency ordering for patch |
   | +1 | mvninstall | 462 | the patch passed |
   | +1 | compile | 271 | the patch passed |
   | +1 | javac | 271 | the patch passed |
   | +1 | checkstyle | 71 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 610 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 84 | hadoop-ozone generated 1 new + 8 unchanged - 0 fixed = 
9 total (was 8) |
   | +1 | findbugs | 533 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 243 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2426 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 56 | The patch does not generate ASF License warnings. |
   | | | 7337 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.TestContainerStateMachineIdempotency |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.scm.node.TestQueryNode |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-867/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/867 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 690734c4a711 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0b1e288 |
   | Default Java | 1.8.0_212 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-867/4/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-867/4/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-867/4/testReport/ |
   | Max. process+thread count | 4434 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-867/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #884: HDDS-1620. Implement Volume Write Requests to use Cache and DoubleBuffer.

2019-06-05 Thread GitBox
hadoop-yetus commented on issue #884: HDDS-1620. Implement Volume Write 
Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/884#issuecomment-499245264
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 44 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 11 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for branch |
   | +1 | mvninstall | 538 | trunk passed |
   | +1 | compile | 320 | trunk passed |
   | +1 | checkstyle | 85 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 943 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 176 | trunk passed |
   | 0 | spotbugs | 354 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 562 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 31 | Maven dependency ordering for patch |
   | +1 | mvninstall | 495 | the patch passed |
   | +1 | compile | 316 | the patch passed |
   | +1 | cc | 316 | the patch passed |
   | -1 | javac | 212 | hadoop-ozone generated 4 new + 3 unchanged - 0 fixed = 
7 total (was 3) |
   | +1 | checkstyle | 84 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 742 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 175 | the patch passed |
   | +1 | findbugs | 552 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 246 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1364 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 52 | The patch does not generate ASF License warnings. |
   | | | 6914 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/884 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc xml |
   | uname | Linux 56e663f1aba4 4.4.0-141-generic #167~14.04.1-Ubuntu SMP Mon 
Dec 10 13:20:24 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0b1e288 |
   | Default Java | 1.8.0_212 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/7/artifact/out/diff-compile-javac-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/7/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/7/testReport/ |
   | Max. process+thread count | 4582 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/7/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] ajayydv commented on issue #885: HDDS-1541. Implement addAcl, removeAcl, setAcl, getAcl for Key. Contributed by Ajay Kumat.

2019-06-05 Thread GitBox
ajayydv commented on issue #885: HDDS-1541. Implement 
addAcl,removeAcl,setAcl,getAcl for Key. Contributed by Ajay Kumat.
URL: https://github.com/apache/hadoop/pull/885#issuecomment-499239660
 
 
   Both failing tests pass locally, so the failures seem unrelated.





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #877: HDDS-1618. Merge code for HA and Non-HA OM write type requests for bucket

2019-06-05 Thread GitBox
bharatviswa504 commented on a change in pull request #877: HDDS-1618. Merge 
code for HA and Non-HA OM write type requests for bucket
URL: https://github.com/apache/hadoop/pull/877#discussion_r290907867
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerProtocolServerSideTranslatorPB.java
 ##
 @@ -52,6 +54,10 @@
   private final RequestHandler handler;
   private final boolean isRatisEnabled;
   private final OzoneManager ozoneManager;
+  private final OMMetadataManager omMetadataManager;
+  // Used during Non-HA when calling validateAndUpdateCache methods in
+  // OMClientRequest. As in Non-HA with out ratis, we don't use this.
 
 Review comment:
   Fixed.





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #877: HDDS-1618. Merge code for HA and Non-HA OM write type requests for bucket

2019-06-05 Thread GitBox
bharatviswa504 commented on a change in pull request #877: HDDS-1618. Merge 
code for HA and Non-HA OM write type requests for bucket
URL: https://github.com/apache/hadoop/pull/877#discussion_r290907949
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/OMClientResponse.java
 ##
 @@ -53,5 +53,16 @@ public OMResponse getOMResponse() {
 return omResponse;
   }
 
+
+  /**
+   * For Non-HA add response to OM DB. As for Non-HA we cannot use double
+   * buffer and add response to cache and then return response to the client,
+   * as when flush is missed in HA, ratis has provided guaranty to apply the
+   * transactions again. In Non-HA, we cannot use the same model, so we need
 
 Review comment:
   Updated.
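
   For context, a toy sketch of the two write paths this javadoc contrasts, 
with a plain map as a hypothetical stand-in for the OM metadata table 
(illustrative only, not the OM code):

   import java.util.ArrayList;
   import java.util.HashMap;
   import java.util.List;
   import java.util.Map;

   final class ResponseWritePathsSketch {

     private final Map<String, String> table = new HashMap<>();
     // HA: mutations are queued and flushed later by the double buffer;
     // Ratis can replay the transaction if a flush is missed.
     private final List<Runnable> doubleBuffer = new ArrayList<>();

     void addToDBBatchHa(String key, String value) {
       doubleBuffer.add(() -> table.put(key, value));
     }

     // Non-HA: there is no Ratis replay guarantee, so the response must be
     // applied to the DB before replying to the client.
     void addResponseToOmDbNonHa(String key, String value) {
       table.put(key, value);
     }

     void flush() {
       doubleBuffer.forEach(Runnable::run);
       doubleBuffer.clear();
     }
   }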





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #877: HDDS-1618. Merge code for HA and Non-HA OM write type requests for bucket

2019-06-05 Thread GitBox
bharatviswa504 commented on a change in pull request #877: HDDS-1618. Merge 
code for HA and Non-HA OM write type requests for bucket
URL: https://github.com/apache/hadoop/pull/877#discussion_r290907903
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/OMVolumeDeleteResponse.java
 ##
 @@ -59,5 +59,18 @@ public void addToDBBatch(OMMetadataManager omMetadataManager,
         omMetadataManager.getVolumeKey(volume));
   }
 
+  @Override
+  public void addResponseToOMDB(OMMetadataManager omMetadataManager)
+      throws IOException {
+    String dbUserKey = omMetadataManager.getUserKey(owner);
+    VolumeList volumeList = updatedVolumeList;
 
 Review comment:
   Done.





[GitHub] [hadoop] hadoop-yetus commented on issue #703: HDDS-1371. Download RocksDB checkpoint from OM Leader to Follower.

2019-06-05 Thread GitBox
hadoop-yetus commented on issue #703: HDDS-1371. Download RocksDB checkpoint 
from OM Leader to Follower.
URL: https://github.com/apache/hadoop/pull/703#issuecomment-499226601
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 63 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for branch |
   | +1 | mvninstall | 535 | trunk passed |
   | +1 | compile | 287 | trunk passed |
   | +1 | checkstyle | 81 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 950 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 176 | trunk passed |
   | 0 | spotbugs | 382 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 587 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 30 | Maven dependency ordering for patch |
   | +1 | mvninstall | 473 | the patch passed |
   | +1 | compile | 291 | the patch passed |
   | +1 | javac | 291 | the patch passed |
   | -0 | checkstyle | 46 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 733 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 62 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 101 | hadoop-ozone generated 7 new + 8 unchanged - 0 fixed 
= 15 total (was 8) |
   | -1 | findbugs | 86 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | +1 | unit | 264 | hadoop-hdds in the patch passed. |
   | -1 | unit | 59 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 38 | The patch does not generate ASF License warnings. |
   | | | 5314 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-703/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/703 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 2f113408fdf3 4.4.0-141-generic #167~14.04.1-Ubuntu SMP Mon 
Dec 10 13:20:24 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0b1e288 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-703/6/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-703/6/artifact/out/patch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-703/6/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-703/6/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-703/6/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-703/6/testReport/ |
   | Max. process+thread count | 403 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/framework hadoop-ozone/client 
hadoop-ozone/common hadoop-ozone/integration-test hadoop-ozone/ozone-manager 
hadoop-ozone/ozone-recon U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-703/6/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #877: HDDS-1618. Merge code for HA and Non-HA OM write type requests for bucket

2019-06-05 Thread GitBox
bharatviswa504 commented on a change in pull request #877: HDDS-1618. Merge 
code for HA and Non-HA OM write type requests for bucket
URL: https://github.com/apache/hadoop/pull/877#discussion_r290901928
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerProtocolServerSideTranslatorPB.java
 ##
 @@ -99,7 +106,36 @@ public OMResponse submitRequest(RpcController controller,
         return submitRequestToRatis(request);
       }
     } else {
-      return submitRequestDirectlyToOM(request);
+      try {
+        OMClientRequest omClientRequest =
+            OzoneManagerRatisUtils.createClientRequest(request);
+        if (omClientRequest != null) {
+          request = omClientRequest.preExecute(ozoneManager);
+        } else {
+          // Still work is ongoing, so for some of the requests still
+          // following older approach.
+          return submitRequestDirectlyToOM(request);
+        }
+      } catch (IOException ex) {
+        // As some of the preExecute returns error. So handle here.
+        return createErrorResponse(request, ex);
+      }
+      OMClientRequest omClientRequest = OzoneManagerRatisUtils
+          .createClientRequest(request);
+
 
 Review comment:
   This is because if the request is modified in preExecute, we don't set the 
modified request back.
   I am planning to change this behavior in the Volume Jira implementation.
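
   To illustrate the intended flow, a minimal, self-contained sketch; the 
types below are simplified stand-ins for the OM classes, not the real API:

   import java.io.IOException;

   final class PreExecuteFlowSketch {

     /** Stand-in for the real OMClientRequest. */
     interface OMClientRequest {
       String preExecute(String incomingRequest) throws IOException;
     }

     static String submit(String request, OMClientRequest omClientRequest)
         throws IOException {
       if (omClientRequest == null) {
         // Older path: submit the request directly to OM.
         return request;
       }
       // preExecute may return a modified request; keep that result and
       // continue with the same omClientRequest instance instead of
       // re-creating the client request from the original message.
       String effectiveRequest = omClientRequest.preExecute(request);
       return effectiveRequest;
     }
   }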





[GitHub] [hadoop] hadoop-yetus commented on issue #802: HADOOP-16279. S3Guard: Implement time-based (TTL) expiry for entries …

2019-06-05 Thread GitBox
hadoop-yetus commented on issue #802: HADOOP-16279. S3Guard: Implement 
time-based (TTL) expiry for entries …
URL: https://github.com/apache/hadoop/pull/802#issuecomment-499220737
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 33 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 9 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1016 | trunk passed |
   | +1 | compile | 1061 | trunk passed |
   | +1 | checkstyle | 141 | trunk passed |
   | +1 | mvnsite | 130 | trunk passed |
   | +1 | shadedclient | 1011 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 86 | trunk passed |
   | 0 | spotbugs | 59 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 171 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for patch |
   | +1 | mvninstall | 75 | the patch passed |
   | +1 | compile | 1023 | the patch passed |
   | +1 | javac | 1023 | the patch passed |
   | -0 | checkstyle | 143 | root: The patch generated 4 new + 49 unchanged - 3 
fixed = 53 total (was 52) |
   | +1 | mvnsite | 126 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 657 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 103 | the patch passed |
   | +1 | findbugs | 201 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 507 | hadoop-common in the patch passed. |
   | +1 | unit | 289 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 54 | The patch does not generate ASF License warnings. |
   | | | 6872 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-802/15/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/802 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle |
   | uname | Linux 46b163e76ea7 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0b1e288 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-802/15/artifact/out/diff-checkstyle-root.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-802/15/testReport/ |
   | Max. process+thread count | 1463 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-802/15/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #912: HDDS-1201. Reporting Corruptions in Containers to SCM

2019-06-05 Thread GitBox
hadoop-yetus commented on issue #912: HDDS-1201. Reporting Corruptions in 
Containers to SCM
URL: https://github.com/apache/hadoop/pull/912#issuecomment-499217873
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 34 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 505 | trunk passed |
   | +1 | compile | 295 | trunk passed |
   | +1 | checkstyle | 81 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 890 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 180 | trunk passed |
   | 0 | spotbugs | 329 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 517 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 461 | the patch passed |
   | +1 | compile | 304 | the patch passed |
   | +1 | javac | 304 | the patch passed |
   | -0 | checkstyle | 47 | hadoop-hdds: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 692 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 178 | the patch passed |
   | +1 | findbugs | 532 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 228 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1216 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 61 | The patch does not generate ASF License warnings. |
   | | | 6464 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-912/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/912 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux b3ff66d55bf8 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0b1e288 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-912/1/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-912/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-912/1/testReport/ |
   | Max. process+thread count | 4677 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service U: 
hadoop-hdds/container-service |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-912/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #877: HDDS-1618. Merge code for HA and Non-HA OM write type requests for bucket

2019-06-05 Thread GitBox
bharatviswa504 commented on a change in pull request #877: HDDS-1618. Merge 
code for HA and Non-HA OM write type requests for bucket
URL: https://github.com/apache/hadoop/pull/877#discussion_r290893163
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/OMVolumeCreateResponse.java
 ##
 @@ -66,5 +66,16 @@ public OmVolumeArgs getOmVolumeArgs() {
     return omVolumeArgs;
   }
 
+  @Override
+  public void addResponseToOMDB(OMMetadataManager omMetadataManager)
+      throws IOException {
+    String dbVolumeKey =
+        omMetadataManager.getVolumeKey(omVolumeArgs.getVolume());
+    String dbUserKey =
+        omMetadataManager.getUserKey(omVolumeArgs.getOwnerName());
+
+    omMetadataManager.getVolumeTable().put(dbVolumeKey, omVolumeArgs);
+    omMetadataManager.getUserTable().put(dbUserKey, volumeList);
 
 Review comment:
   volumeList holds the previously owned volumes plus currentVolume, the volume 
that is being created now.
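
   In other words, roughly the following, with plain strings standing in for 
the protobuf VolumeList (illustrative only):

   import java.util.ArrayList;
   import java.util.List;

   final class VolumeListSketch {

     /** The user -> volumes entry written back on create: the previously
      *  owned volumes plus the volume being created now. */
     static List<String> updatedVolumeList(List<String> previousVolumes,
         String currentVolume) {
       List<String> updated = new ArrayList<>(previousVolumes);
       updated.add(currentVolume);
       return updated;
     }
   }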





[GitHub] [hadoop] hadoop-yetus commented on issue #885: HDDS-1541. Implement addAcl, removeAcl, setAcl, getAcl for Key. Contributed by Ajay Kumat.

2019-06-05 Thread GitBox
hadoop-yetus commented on issue #885: HDDS-1541. Implement 
addAcl,removeAcl,setAcl,getAcl for Key. Contributed by Ajay Kumat.
URL: https://github.com/apache/hadoop/pull/885#issuecomment-499212328
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 73 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 7 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 12 | Maven dependency ordering for branch |
   | +1 | mvninstall | 564 | trunk passed |
   | +1 | compile | 298 | trunk passed |
   | +1 | checkstyle | 93 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 897 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 181 | trunk passed |
   | 0 | spotbugs | 335 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 527 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 28 | Maven dependency ordering for patch |
   | +1 | mvninstall | 498 | the patch passed |
   | +1 | compile | 313 | the patch passed |
   | +1 | cc | 313 | the patch passed |
   | +1 | javac | 313 | the patch passed |
   | +1 | checkstyle | 99 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 683 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 179 | the patch passed |
   | +1 | findbugs | 539 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 290 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1511 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 46 | The patch does not generate ASF License warnings. |
   | | | 7031 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.scm.node.TestSCMNodeMetrics |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-885/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/885 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 8586a60de279 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0b1e288 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-885/9/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-885/9/testReport/ |
   | Max. process+thread count | 4369 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/client 
hadoop-ozone/ozone-manager hadoop-ozone/objectstore-service 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-885/9/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #703: HDDS-1371. Download RocksDB checkpoint from OM Leader to Follower.

2019-06-05 Thread GitBox
bharatviswa504 commented on a change in pull request #703: HDDS-1371. Download 
RocksDB checkpoint from OM Leader to Follower.
URL: https://github.com/apache/hadoop/pull/703#discussion_r290886074
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -536,14 +535,6 @@ private void setOMNodeDetails(String serviceId, String nodeId,
     configuration.set(OZONE_OM_ADDRESS_KEY,
         NetUtils.getHostPortString(rpcAddress));
 
-    // Find the Ratis storage dir
-    String omRatisDirectory = OmUtils.getOMRatisDirectory(configuration);
-    if (omRatisDirectory == null || omRatisDirectory.isEmpty()) {
-      throw new IllegalArgumentException(HddsConfigKeys.OZONE_METADATA_DIRS +
-          " must be defined.");
-    }
-
-    omRatisStorageDir = OmUtils.createOMDir(omRatisDirectory);
 
 Review comment:
   Why is this code completely removed? Is the Ratis storage dir creation/check 
now done in a different place?





[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #910: HDDS-1612. Add 'scmcli printTopology' shell command to print datanode…

2019-06-05 Thread GitBox
xiaoyuyao commented on a change in pull request #910: HDDS-1612. Add 'scmcli 
printTopology' shell command to print datanode…
URL: https://github.com/apache/hadoop/pull/910#discussion_r290880280
 
 

 ##
 File path: 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/TopologySubcommand.java
 ##
 @@ -0,0 +1,80 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.cli;
+
+import org.apache.hadoop.hdds.cli.HddsVersionProvider;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.client.ScmClient;
+import picocli.CommandLine;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.DEAD;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.DECOMMISSIONED;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.DECOMMISSIONING;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.HEALTHY;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.STALE;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.Callable;
+
+/**
+ * Handler of printTopology command.
+ */
+@CommandLine.Command(
+    name = "printTopology",
+    description = "Print a tree of the network topology as reported by SCM",
+    mixinStandardHelpOptions = true,
+    versionProvider = HddsVersionProvider.class)
+public class TopologySubcommand implements Callable<Void> {
+
+  @CommandLine.ParentCommand
+  private SCMCLI parent;
+
+  private static List<HddsProtos.NodeState> stateArray = new ArrayList<>();
+
+  static {
+    stateArray.add(HEALTHY);
+    stateArray.add(STALE);
+    stateArray.add(DEAD);
+    stateArray.add(DECOMMISSIONING);
+    stateArray.add(DECOMMISSIONED);
+  }
+
+  @Override
+  public Void call() throws Exception {
+    try (ScmClient scmClient = parent.createScmClient()) {
+      for (HddsProtos.NodeState state : stateArray) {
+        List<HddsProtos.Node> nodes = scmClient.queryNode(state,
+            HddsProtos.QueryScope.CLUSTER, "");
+        if (nodes != null && nodes.size() > 0) {
+          // show node state
+          System.out.println("State = " + state.toString());
 
 Review comment:
   Thanks @ChenSammi for the patch. LGTM, +1 pending CI. 
   One question: should we have an option to organize the nodes by topology 
in addition to state? 
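   For illustration, a rough sketch of what grouping by topology inside 
call() could look like; getNetworkLocation() is an assumed accessor for the 
node's topology path, not something from this patch:
   
       // Hypothetical sketch: group the queried nodes by network location
       // (e.g. /rack-1) instead of listing them purely by state.
       // Assumes java.util.Map and java.util.TreeMap are imported.
       Map<String, List<HddsProtos.Node>> byLocation = new TreeMap<>();
       for (HddsProtos.Node node : nodes) {
         String location = node.getNodeID().getNetworkLocation();
         byLocation.computeIfAbsent(location, k -> new ArrayList<>()).add(node);
       }
       byLocation.forEach((location, members) -> {
         System.out.println("Location = " + location);
         members.forEach(n ->
             System.out.println("  " + n.getNodeID().getHostName()));
       });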


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] elek commented on a change in pull request #905: HDDS-1508. Provide example k8s deployment files for the new CSI server

2019-06-05 Thread GitBox
elek commented on a change in pull request #905: HDDS-1508. Provide example k8s 
deployment files for the new CSI server
URL: https://github.com/apache/hadoop/pull/905#discussion_r290873793
 
 

 ##
 File path: 
hadoop-ozone/dist/src/main/k8s/examples/ozone-csi/datanode-daemonset.yaml
 ##
 @@ -0,0 +1,56 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+  name: datanode
+  labels:
+    app.kubernetes.io/component: ozone
+spec:
+  selector:
+    matchLabels:
+      app: ozone
+      component: datanode
+  template:
+    metadata:
+      annotations:
+        prometheus.io/scrape: "true"
+        prometheus.io/port: "9882"
+        prometheus.io/path: /prom
+      labels:
+        app: ozone
+        component: datanode
+    spec:
+      containers:
+      - name: datanode
+        image: '@docker.image@'
+        args:
+        - ozone
+        - datanode
+        ports:
+        - containerPort: 9870
+          name: rpc
+        envFrom:
+        - configMapRef:
+            name: config
+        volumeMounts:
+        - name: data
+          mountPath: /data
+      initContainers: []
+      volumes:
+      - name: data
+        emptyDir: {}
 
 Review comment:
   > And point the location to a maven build directory
   
   Unfortunately Kubernetes works in a different way; it's not possible:
   
* There are multiple nodes; you can't mount one build directory to multiple 
nodes
* The build directory may or may not be on one of the nodes
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao edited a comment on issue #903: HDDS-1490. Support configurable container placement policy through 'o…

2019-06-05 Thread GitBox
xiaoyuyao edited a comment on issue #903: HDDS-1490. Support configurable 
container placement policy through 'o…
URL: https://github.com/apache/hadoop/pull/903#issuecomment-499196069
 
 
   bq. Cannot mock/spy class org.apache.hadoop.ozone.om.OzoneManager
   Mockito cannot mock/spy because :final class
   
   I think this is related to the change in the pom.xml.
   As seen in the screenshot: ![Screen Shot 2019-06-05 at 11 16 43 
AM](https://user-images.githubusercontent.com/7039184/58979837-cbbae700-8783-11e9-9159-6161593c85ba.png)
   
   We only add the topology-related xml files to testResources explicitly, but 
the mockito-extensions directory from test/resources is missing, which is 
needed to suppress the warning above. I believe adding the following to the 
pom should fix the issue:
   

  <testResource>
    <directory>${basedir}/src/test/resources</directory>
  </testResource>

   
   Also, please fix the checkstyle issue in the next commit. 
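   For reference, a sketch of how the surrounding testResources section might 
then look (the layout here is assumed, not copied from the PR):
   
       <testResources>
         <!-- Keep the default test resources on the classpath in addition
              to any explicitly listed topology files. -->
         <testResource>
           <directory>${basedir}/src/test/resources</directory>
         </testResource>
       </testResources>
   
   The mockito-extensions directory matters because Mockito looks for 
src/test/resources/mockito-extensions/org.mockito.plugins.MockMaker 
(containing mock-maker-inline) to enable mocking final classes.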
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #802: HADOOP-16279. S3Guard: Implement time-based (TTL) expiry for entries …

2019-06-05 Thread GitBox
hadoop-yetus commented on issue #802: HADOOP-16279. S3Guard: Implement 
time-based (TTL) expiry for entries …
URL: https://github.com/apache/hadoop/pull/802#issuecomment-499198455
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 55 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 9 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1154 | trunk passed |
   | +1 | compile | 1134 | trunk passed |
   | +1 | checkstyle | 143 | trunk passed |
   | +1 | mvnsite | 120 | trunk passed |
   | +1 | shadedclient | 980 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 88 | trunk passed |
   | 0 | spotbugs | 65 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 197 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for patch |
   | +1 | mvninstall | 80 | the patch passed |
   | +1 | compile | 1099 | the patch passed |
   | +1 | javac | 1099 | the patch passed |
   | -0 | checkstyle | 144 | root: The patch generated 4 new + 49 unchanged - 3 
fixed = 53 total (was 52) |
   | +1 | mvnsite | 117 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 681 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 89 | the patch passed |
   | +1 | findbugs | 211 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 563 | hadoop-common in the patch passed. |
   | +1 | unit | 283 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 41 | The patch does not generate ASF License warnings. |
   | | | 7215 | |
   
   
   | Subsystem | Report/Notes |
   |----------:|:---------|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-802/14/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/802 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle |
   | uname | Linux 184fb65dde2f 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0b1e288 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-802/14/artifact/out/diff-checkstyle-root.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-802/14/testReport/ |
   | Max. process+thread count | 1385 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-802/14/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hanishakoneru commented on a change in pull request #877: HDDS-1618. Merge code for HA and Non-HA OM write type requests for bucket

2019-06-05 Thread GitBox
hanishakoneru commented on a change in pull request #877: HDDS-1618. Merge code 
for HA and Non-HA OM write type requests for bucket
URL: https://github.com/apache/hadoop/pull/877#discussion_r290492749
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/OMVolumeCreateResponse.java
 ##
 @@ -66,5 +66,16 @@ public OmVolumeArgs getOmVolumeArgs() {
     return omVolumeArgs;
   }
 
+  @Override
+  public void addResponseToOMDB(OMMetadataManager omMetadataManager)
+      throws IOException {
+    String dbVolumeKey =
+        omMetadataManager.getVolumeKey(omVolumeArgs.getVolume());
+    String dbUserKey =
+        omMetadataManager.getUserKey(omVolumeArgs.getOwnerName());
+
+    omMetadataManager.getVolumeTable().put(dbVolumeKey, omVolumeArgs);
+    omMetadataManager.getUserTable().put(dbUserKey, volumeList);
 
 Review comment:
   Does the volumeList include the previous volumes owned by the user or just 
the current volume?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hanishakoneru commented on a change in pull request #877: HDDS-1618. Merge code for HA and Non-HA OM write type requests for bucket

2019-06-05 Thread GitBox
hanishakoneru commented on a change in pull request #877: HDDS-1618. Merge code 
for HA and Non-HA OM write type requests for bucket
URL: https://github.com/apache/hadoop/pull/877#discussion_r290489801
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/OMClientResponse.java
 ##
 @@ -53,5 +53,16 @@ public OMResponse getOMResponse() {
     return omResponse;
   }
 
+
+  /**
+   * For Non-HA add response to OM DB. As for Non-HA we cannot use double
+   * buffer and add response to cache and then return response to the client,
+   * as when flush is missed in HA, ratis has provided guaranty to apply the
+   * transactions again. In Non-HA, we cannot use the same model, so we need
 
 Review comment:
   This is confusing. Can you please break it down?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hanishakoneru commented on a change in pull request #877: HDDS-1618. Merge code for HA and Non-HA OM write type requests for bucket

2019-06-05 Thread GitBox
hanishakoneru commented on a change in pull request #877: HDDS-1618. Merge code 
for HA and Non-HA OM write type requests for bucket
URL: https://github.com/apache/hadoop/pull/877#discussion_r290872149
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerProtocolServerSideTranslatorPB.java
 ##
 @@ -99,7 +106,36 @@ public OMResponse submitRequest(RpcController controller,
         return submitRequestToRatis(request);
       }
     } else {
-      return submitRequestDirectlyToOM(request);
+      try {
+        OMClientRequest omClientRequest =
+            OzoneManagerRatisUtils.createClientRequest(request);
+        if (omClientRequest != null) {
+          request = omClientRequest.preExecute(ozoneManager);
+        } else {
+          // Still work is ongoing, so for some of the requests still
+          // following older approach.
+          return submitRequestDirectlyToOM(request);
+        }
+      } catch (IOException ex) {
+        // As some of the preExecute returns error. So handle here.
+        return createErrorResponse(request, ex);
+      }
+      OMClientRequest omClientRequest = OzoneManagerRatisUtils
+          .createClientRequest(request);
+
 
 Review comment:
   Why do we need to create ClientRequest again here?
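   One possible shape for avoiding the second lookup, declaring a single 
omClientRequest outside the try block (a sketch only, using the names from 
the hunk above):
   
       OMClientRequest omClientRequest;
       try {
         omClientRequest = OzoneManagerRatisUtils.createClientRequest(request);
         if (omClientRequest == null) {
           // Some request types still follow the older direct-to-OM path.
           return submitRequestDirectlyToOM(request);
         }
         request = omClientRequest.preExecute(ozoneManager);
       } catch (IOException ex) {
         // preExecute can fail; turn the exception into an error response.
         return createErrorResponse(request, ex);
       }
       // omClientRequest is reused below instead of being created again.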


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hanishakoneru commented on a change in pull request #877: HDDS-1618. Merge code for HA and Non-HA OM write type requests for bucket

2019-06-05 Thread GitBox
hanishakoneru commented on a change in pull request #877: HDDS-1618. Merge code 
for HA and Non-HA OM write type requests for bucket
URL: https://github.com/apache/hadoop/pull/877#discussion_r290495002
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerProtocolServerSideTranslatorPB.java
 ##
 @@ -52,6 +54,10 @@
   private final RequestHandler handler;
   private final boolean isRatisEnabled;
   private final OzoneManager ozoneManager;
+  private final OMMetadataManager omMetadataManager;
+  // Used during Non-HA when calling validateAndUpdateCache methods in
+  // OMClientRequest. As in Non-HA with out ratis, we don't use this.
 
 Review comment:
   The comment is contradictory. Do we use it during HA or non-HA?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hanishakoneru commented on a change in pull request #877: HDDS-1618. Merge code for HA and Non-HA OM write type requests for bucket

2019-06-05 Thread GitBox
hanishakoneru commented on a change in pull request #877: HDDS-1618. Merge code 
for HA and Non-HA OM write type requests for bucket
URL: https://github.com/apache/hadoop/pull/877#discussion_r290493740
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/OMVolumeDeleteResponse.java
 ##
 @@ -59,5 +59,18 @@ public void addToDBBatch(OMMetadataManager omMetadataManager,
         omMetadataManager.getVolumeKey(volume));
   }
 
+  @Override
+  public void addResponseToOMDB(OMMetadataManager omMetadataManager)
+      throws IOException {
+    String dbUserKey = omMetadataManager.getUserKey(owner);
+    VolumeList volumeList = updatedVolumeList;
 
 Review comment:
   We can directly use updatedVolumeList instead of copying it to a local 
variable.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #909: HDDS-1645 Change the version of Pico CLI to the latest 3.x release - 3.9.6

2019-06-05 Thread GitBox
hadoop-yetus commented on issue #909: HDDS-1645 Change the version of Pico CLI 
to the latest 3.x release - 3.9.6
URL: https://github.com/apache/hadoop/pull/909#issuecomment-499196560
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 33 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 66 | Maven dependency ordering for branch |
   | +1 | mvninstall | 503 | trunk passed |
   | +1 | compile | 287 | trunk passed |
   | +1 | checkstyle | 90 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 890 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 181 | trunk passed |
   | 0 | spotbugs | 330 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 517 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 36 | Maven dependency ordering for patch |
   | +1 | mvninstall | 467 | the patch passed |
   | +1 | compile | 305 | the patch passed |
   | +1 | javac | 305 | the patch passed |
   | +1 | checkstyle | 75 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 1 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 688 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 153 | the patch passed |
   | +1 | findbugs | 543 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 226 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1665 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 49 | The patch does not generate ASF License warnings. |
   | | | 6950 | |
   
   
   | Reason | Tests |
   |---------:|:----------|
   | Failed junit tests | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.hdds.scm.pipeline.TestSCMPipelineManager |
   
   
   | Subsystem | Report/Notes |
   |----------:|:---------|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-909/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/909 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle |
   | uname | Linux 6272fdf82795 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0b1e288 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-909/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-909/2/testReport/ |
   | Max. process+thread count | 5007 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds hadoop-ozone/integration-test U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-909/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #911: HADOOP-16344 Make DurationInfo "public unstable

2019-06-05 Thread GitBox
hadoop-yetus commented on issue #911: HADOOP-16344 Make DurationInfo "public 
unstable
URL: https://github.com/apache/hadoop/pull/911#issuecomment-499196385
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 37 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1091 | trunk passed |
   | +1 | compile | 1061 | trunk passed |
   | +1 | checkstyle | 37 | trunk passed |
   | +1 | mvnsite | 76 | trunk passed |
   | +1 | shadedclient | 754 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 59 | trunk passed |
   | 0 | spotbugs | 117 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 114 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 24 | hadoop-common in the patch failed. |
   | -1 | compile | 46 | root in the patch failed. |
   | -1 | javac | 46 | root in the patch failed. |
   | -0 | checkstyle | 30 | hadoop-common-project/hadoop-common: The patch 
generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) |
   | -1 | mvnsite | 26 | hadoop-common in the patch failed. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | -1 | shadedclient | 32 | patch has errors when building and testing our 
client artifacts. |
   | -1 | javadoc | 45 | hadoop-common-project_hadoop-common generated 2 new + 
0 unchanged - 0 fixed = 2 total (was 0) |
   | -1 | findbugs | 24 | hadoop-common in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 29 | hadoop-common in the patch failed. |
   | +1 | asflicense | 23 | The patch does not generate ASF License warnings. |
   | | | 3547 | |
   
   
   | Subsystem | Report/Notes |
   |----------:|:---------|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-911/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/911 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 5874f3c47329 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0b1e288 |
   | Default Java | 1.8.0_212 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-911/1/artifact/out/patch-mvninstall-hadoop-common-project_hadoop-common.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-911/1/artifact/out/patch-compile-root.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-911/1/artifact/out/patch-compile-root.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-911/1/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-911/1/artifact/out/patch-mvnsite-hadoop-common-project_hadoop-common.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-911/1/artifact/out/diff-javadoc-javadoc-hadoop-common-project_hadoop-common.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-911/1/artifact/out/patch-findbugs-hadoop-common-project_hadoop-common.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-911/1/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-911/1/testReport/ |
   | Max. process+thread count | 446 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-911/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org

[GitHub] [hadoop] xiaoyuyao commented on issue #903: HDDS-1490. Support configurable container placement policy through 'o…

2019-06-05 Thread GitBox
xiaoyuyao commented on issue #903: HDDS-1490. Support configurable container 
placement policy through 'o…
URL: https://github.com/apache/hadoop/pull/903#issuecomment-499196069
 
 
   bq. Cannot mock/spy class org.apache.hadoop.ozone.om.OzoneManager
   Mockito cannot mock/spy because :final class
   
   I think this is related to the change in the pom.xml, where we only add the 
topology-related xml files to testResources explicitly. In this case, I think 
we will need to explicitly add the mockito-extensions directory from 
test/resources as well. This is needed to suppress the warning above. 
   

  <testResource>
    <directory>${basedir}/src/test/resources</directory>
  </testResource>

   
   Also, please fix the checkstyle issue in the next commit. 
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hanishakoneru commented on issue #703: HDDS-1371. Download RocksDB checkpoint from OM Leader to Follower.

2019-06-05 Thread GitBox
hanishakoneru commented on issue #703: HDDS-1371. Download RocksDB checkpoint 
from OM Leader to Follower.
URL: https://github.com/apache/hadoop/pull/703#issuecomment-499195295
 
 
   /retest


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bgaborg closed pull request #802: HADOOP-16279. S3Guard: Implement time-based (TTL) expiry for entries …

2019-06-05 Thread GitBox
bgaborg closed pull request #802: HADOOP-16279. S3Guard: Implement time-based 
(TTL) expiry for entries …
URL: https://github.com/apache/hadoop/pull/802
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bgaborg opened a new pull request #802: HADOOP-16279. S3Guard: Implement time-based (TTL) expiry for entries …

2019-06-05 Thread GitBox
bgaborg opened a new pull request #802: HADOOP-16279. S3Guard: Implement 
time-based (TTL) expiry for entries …
URL: https://github.com/apache/hadoop/pull/802
 
 
   …(and tombstones)


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #879: HADOOP-15563 S3Guard to create on-demand DDB tables

2019-06-05 Thread GitBox
steveloughran commented on issue #879: HADOOP-15563 S3Guard to create on-demand 
DDB tables
URL: https://github.com/apache/hadoop/pull/879#issuecomment-499179179
 
 
   Update: we do need the per-test setting of the capacity, else the tables 
created have provisioned-capacity numbers. Assumption: FS caching may 
complicate things: setting the capacity on the new FS doesn't get picked up 
if another FS is used. 
   Factoring out the getFS.getConf() + patch routine, and also calling clone() 
so that each test's changes are isolated from the others. Without that (as we 
are today) one test could interfere with its successor.
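   A sketch of that isolation idea, assuming a test base class exposing 
getFileSystem(); the helper name patchConfiguration is invented:
   
       // Copy the FS configuration so per-test tweaks never leak into the
       // shared (possibly cached) FileSystem instance. The Configuration
       // copy constructor duplicates the property set.
       private Configuration patchConfiguration(String key, String value) {
         Configuration conf = new Configuration(getFileSystem().getConf());
         conf.set(key, value);
         return conf;
       }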


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] eyanghwx commented on a change in pull request #905: HDDS-1508. Provide example k8s deployment files for the new CSI server

2019-06-05 Thread GitBox
eyanghwx commented on a change in pull request #905: HDDS-1508. Provide example 
k8s deployment files for the new CSI server
URL: https://github.com/apache/hadoop/pull/905#discussion_r290851056
 
 

 ##
 File path: 
hadoop-ozone/dist/src/main/k8s/examples/ozone-csi/datanode-daemonset.yaml
 ##
 @@ -0,0 +1,56 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+  name: datanode
+  labels:
+    app.kubernetes.io/component: ozone
+spec:
+  selector:
+    matchLabels:
+      app: ozone
+      component: datanode
+  template:
+    metadata:
+      annotations:
+        prometheus.io/scrape: "true"
+        prometheus.io/port: "9882"
+        prometheus.io/path: /prom
+      labels:
+        app: ozone
+        component: datanode
+    spec:
+      containers:
+      - name: datanode
+        image: '@docker.image@'
+        args:
+        - ozone
+        - datanode
+        ports:
+        - containerPort: 9870
+          name: rpc
+        envFrom:
+        - configMapRef:
+            name: config
+        volumeMounts:
+        - name: data
+          mountPath: /data
+      initContainers: []
+      volumes:
+      - name: data
+        emptyDir: {}
 
 Review comment:
   hostPath would be the right approach, pointing the location to a maven 
build directory. This allows maven to clean up the state when the data is no 
longer needed, and also prevents accidental deletion of user data in the 
default setting.
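   A sketch of that suggestion; the path below is an assumed maven target 
directory, not something from the PR:
   
       # Hypothetical hostPath variant: keep datanode state under a maven
       # build directory on the node so `mvn clean` can remove it.
       volumes:
       - name: data
         hostPath:
           path: /workspace/hadoop-ozone/dist/target/k8s-data
           type: DirectoryOrCreate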


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] shwetayakkali opened a new pull request #912: HDDS-1201. Reporting Corruptions in Containers to SCM

2019-06-05 Thread GitBox
shwetayakkali opened a new pull request #912: HDDS-1201. Reporting Corruptions 
in Containers to SCM
URL: https://github.com/apache/hadoop/pull/912
 
 
   Change-Id: I767ecfe4f27729955ca41b5f634400742a49bbbd
   
   Add protocol message and handling to report container corruptions to the SCM.
   Also add basic recovery handling in SCM.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] pingsutw opened a new pull request #911: HADOOP-16344 Make DurationInfo "public unstable

2019-06-05 Thread GitBox
pingsutw opened a new pull request #911: HADOOP-16344 Make DurationInfo "public 
unstable
URL: https://github.com/apache/hadoop/pull/911
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16314) Make sure all end point URL is covered by the same AuthenticationFilter

2019-06-05 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16856861#comment-16856861
 ] 

Eric Yang commented on HADOOP-16314:


+1 looks good to me.  Will commit if no objections by end of day.

> Make sure all end point URL is covered by the same AuthenticationFilter
> ---
>
> Key: HADOOP-16314
> URL: https://issues.apache.org/jira/browse/HADOOP-16314
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Reporter: Eric Yang
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16314-001.patch, HADOOP-16314-002.patch, 
> HADOOP-16314-003.patch, HADOOP-16314-004.patch, HADOOP-16314-005.patch, 
> HADOOP-16314-006.patch, HADOOP-16314-007.patch, Hadoop Web Security.xlsx, 
> scan.txt
>
>
> The enclosed spreadsheet shows the list of web applications deployed by 
> Hadoop and the filters applied to each entry point.
> Hadoop web protocol impersonation has been inconsistent.  Most entry points 
> do not support the ?doAs parameter.  This creates problems for a secure 
> gateway like Knox when proxying the Hadoop web interface on behalf of the 
> end user.  When the receiving end does not check for the ?doAs flag, the 
> web interface is accessed using the proxy user's credentials.  This can 
> lead to all kinds of security holes that use path traversal to exploit 
> Hadoop. 
> In HADOOP-16287, ProxyUserAuthenticationFilter is proposed as a solution to 
> the web impersonation problem.  This task tracks the changes required in 
> the Hadoop code base to apply the authentication filter globally to each of 
> the web service ports.
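(For illustration, not taken from the JIRA: the impersonation pattern above 
typically looks like the following, where the gateway authenticates as its 
own proxy principal but acts for the end user. Host, port, and path are made 
up.)

    # SPNEGO-authenticate as the gateway's own principal, impersonating
    # "alice" via the doAs query parameter.
    curl --negotiate -u : "http://nn.example.com:9870/jmx?doAs=alice"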



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #879: HADOOP-15563 S3Guard to create on-demand DDB tables

2019-06-05 Thread GitBox
steveloughran commented on issue #879: HADOOP-15563 S3Guard to create on-demand 
DDB tables
URL: https://github.com/apache/hadoop/pull/879#issuecomment-499166626
 
 
   BTW, HADOOP-15183 will be fixing the ITestDynamoDBMetadataStoreScale tests 
to move from verifying that DDB throttling is recovered from to being tests 
of the scalability of how our client uses DDB, which will remove the current 
assume(!ondemand) change of HADOOP-16118. Not relevant for this patch, except 
to explain why I'm not doing anything with that test in this one.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #879: HADOOP-15563 S3Guard to create on-demand DDB tables

2019-06-05 Thread GitBox
steveloughran commented on a change in pull request #879: HADOOP-15563 S3Guard 
to create on-demand DDB tables
URL: https://github.com/apache/hadoop/pull/879#discussion_r290836505
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestDynamoDBMetadataStore.java
 ##
 @@ -274,7 +300,9 @@ public void testInitialize() throws IOException {
         getTestTableName("testInitialize");
     final Configuration conf = s3afs.getConf();
     conf.set(S3GUARD_DDB_TABLE_NAME_KEY, tableName);
-    try (DynamoDBMetadataStore ddbms = new DynamoDBMetadataStore()) {
+    enableOnDemand(conf);
 
 Review comment:
   I'll be doing it in the FS configuration and the one for the static 
metastore; they are different configurations.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #879: HADOOP-15563 S3Guard to create on-demand DDB tables

2019-06-05 Thread GitBox
steveloughran commented on a change in pull request #879: HADOOP-15563 S3Guard 
to create on-demand DDB tables
URL: https://github.com/apache/hadoop/pull/879#discussion_r290832435
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestDynamoDBMetadataStore.java
 ##
 @@ -274,7 +300,9 @@ public void testInitialize() throws IOException {
         getTestTableName("testInitialize");
     final Configuration conf = s3afs.getConf();
     conf.set(S3GUARD_DDB_TABLE_NAME_KEY, tableName);
-    try (DynamoDBMetadataStore ddbms = new DynamoDBMetadataStore()) {
+    enableOnDemand(conf);
 
 Review comment:
   the beforeClass is for the static ddbms; the others were for the new ones


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #879: HADOOP-15563 S3Guard to create on-demand DDB tables

2019-06-05 Thread GitBox
steveloughran commented on a change in pull request #879: HADOOP-15563 S3Guard 
to create on-demand DDB tables
URL: https://github.com/apache/hadoop/pull/879#discussion_r290832301
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestDynamoDBMetadataStore.java
 ##
 @@ -274,7 +300,9 @@ public void testInitialize() throws IOException {
         getTestTableName("testInitialize");
     final Configuration conf = s3afs.getConf();
     conf.set(S3GUARD_DDB_TABLE_NAME_KEY, tableName);
-    try (DynamoDBMetadataStore ddbms = new DynamoDBMetadataStore()) {
+    enableOnDemand(conf);
 
 Review comment:
   it's done in the beforeClass for the static table; for the others it's 
done on the config which comes from the FS.
   I'm going to change it to set it in the FS config before we create the FS. 
If someone has per-bucket options to set the capacity then things will be 
different, but I'd be surprised if anyone ever does that.
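   A sketch of that idea (the bucket name is illustrative; the capacity keys 
are the real fs.s3a.s3guard.ddb.table.capacity.* ones):
   
       // Set the table capacity in the base configuration before the
       // FileSystem is created; clear any per-bucket override that would
       // shadow the base keys. Read/write capacity of 0 means on-demand.
       Configuration conf = new Configuration();
       conf.unset("fs.s3a.bucket.example-bucket.s3guard.ddb.table.capacity.read");
       conf.unset("fs.s3a.bucket.example-bucket.s3guard.ddb.table.capacity.write");
       conf.setInt("fs.s3a.s3guard.ddb.table.capacity.read", 0);
       conf.setInt("fs.s3a.s3guard.ddb.table.capacity.write", 0);
       FileSystem fs = FileSystem.newInstance(new URI("s3a://example-bucket/"), conf);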


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #879: HADOOP-15563 S3Guard to create on-demand DDB tables

2019-06-05 Thread GitBox
steveloughran commented on a change in pull request #879: HADOOP-15563 S3Guard 
to create on-demand DDB tables
URL: https://github.com/apache/hadoop/pull/879#discussion_r290829614
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/DDBCapacities.java
 ##
 @@ -82,7 +79,7 @@ public String toString() {
   }
 
   /**
-   * Is the the capacity that of a pay-on-demand table?
+   * Is the the capacity that of a pay-per-request table?
 
 Review comment:
   The AWS docs say "on demand" or "pay by request"; I was trying to be 
consistent but gave up. This is a leftover from an attempt/revert of actually 
changing the method name. Changing to "on demand" (and not pay-on-demand)


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #879: HADOOP-15563 S3Guard to create on-demand DDB tables

2019-06-05 Thread GitBox
steveloughran commented on a change in pull request #879: HADOOP-15563 S3Guard 
to create on-demand DDB tables
URL: https://github.com/apache/hadoop/pull/879#discussion_r290828199
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardTool.java
 ##
 @@ -434,7 +434,8 @@ public abstract int run(String[] args, PrintStream out) throws Exception,
     "\n" +
     "  URLs for Amazon DynamoDB are of the form dynamodb://TABLE_NAME.\n" +
     "  Specifying both the -" + REGION_FLAG + " option and an S3A path\n" +
-    "  is not supported.";
+    "  is not supported.\n"
+    + "To create a table with per-request billing, set the read and write capaciies to 0";
 
 Review comment:
   Fixed; also splitting the line in both the IDE and in the printed text.
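   Usage sketch matching that help text (table and bucket names are 
placeholders):
   
       # Create an on-demand (per-request billing) table by passing zero
       # capacities to s3guard init.
       hadoop s3guard init -meta dynamodb://example-table -read 0 -write 0 \
           s3a://example-bucket/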


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #879: HADOOP-15563 S3Guard to create on-demand DDB tables

2019-06-05 Thread GitBox
steveloughran commented on a change in pull request #879: HADOOP-15563 S3Guard 
to create on-demand DDB tables
URL: https://github.com/apache/hadoop/pull/879#discussion_r290827559
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
 ##
 @@ -465,36 +463,45 @@ private Constants() {
    * For example:
    * fs.s3a.s3guard.ddb.table.tag.mytag
    */
-  @InterfaceStability.Unstable
   public static final String S3GUARD_DDB_TABLE_TAG =
       "fs.s3a.s3guard.ddb.table.tag.";
 
-  /**
 
 Review comment:
   +1. I'm thinking we also need to split internal from public.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #879: HADOOP-15563 S3Guard to create on-demand DDB tables

2019-06-05 Thread GitBox
steveloughran commented on a change in pull request #879: HADOOP-15563 S3Guard 
to create on-demand DDB tables
URL: https://github.com/apache/hadoop/pull/879#discussion_r290827288
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
 ##
 @@ -445,7 +445,6 @@ private Constants() {
    * This config has no default value. If the user does not set this, the
    * S3Guard will operate table in the associated S3 bucket region.
    */
-  @InterfaceStability.Unstable
 
 Review comment:
   yeah, it's time


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


