[GitHub] [hadoop] dineshchitlangia opened a new pull request #1204: HDDS-1768. Audit xxxAcl methods in OzoneManager

2019-07-31 Thread GitBox
dineshchitlangia opened a new pull request #1204: HDDS-1768. Audit xxxAcl 
methods in OzoneManager
URL: https://github.com/apache/hadoop/pull/1204
 
 
   @xiaoyuyao, @ajayydv - could you please review this PR? Thank you.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] dchitlangia closed pull request #1203: HDDS-1768. Audit xxxAcl methods in OzoneManager

2019-07-31 Thread GitBox
dchitlangia closed pull request #1203: HDDS-1768. Audit xxxAcl methods in 
OzoneManager
URL: https://github.com/apache/hadoop/pull/1203
 
 
   





[GitHub] [hadoop] dchitlangia opened a new pull request #1203: HDDS-1768. Audit xxxAcl methods in OzoneManager

2019-07-31 Thread GitBox
dchitlangia opened a new pull request #1203: HDDS-1768. Audit xxxAcl methods in 
OzoneManager
URL: https://github.com/apache/hadoop/pull/1203
 
 
   @xiaoyuyao, @ajayydv - could you please review this PR? Thank you.





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1147: HDDS-1619. Support volume acl operations for OM HA. Contributed by…

2019-07-31 Thread GitBox
bharatviswa504 commented on a change in pull request #1147: HDDS-1619. Support 
volume acl operations for OM HA. Contributed by…
URL: https://github.com/apache/hadoop/pull/1147#discussion_r309528209
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/volume/TestOMVolumeAddAclRequest.java
 ##
 @@ -0,0 +1,123 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.volume;
+
+import org.apache.hadoop.ozone.OzoneAcl;
+import org.apache.hadoop.ozone.om.helpers.OmOzoneAclMap;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.request.TestOMRequestUtils;
+import org.apache.hadoop.ozone.om.request.volume.acl.OMVolumeAddAclRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMRequest;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMResponse;
+
+import org.junit.Assert;
+import org.junit.Test;
+
+import java.util.UUID;
+
 
 Review comment:
   All ACL tests should be placed in the volume/acl package, mirroring the package layout of the source code.





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1147: HDDS-1619. Support volume acl operations for OM HA. Contributed by…

2019-07-31 Thread GitBox
bharatviswa504 commented on a change in pull request #1147: HDDS-1619. Support 
volume acl operations for OM HA. Contributed by…
URL: https://github.com/apache/hadoop/pull/1147#discussion_r309527672
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerHARequestHandlerImpl.java
 ##
 @@ -70,6 +70,7 @@ public OMResponse handleApplyTransaction(OMRequest omRequest,
 case CreateS3Bucket:
 case DeleteS3Bucket:
 case InitiateMultiPartUpload:
+case AddAcl:
 
 Review comment:
   Yes, but when addAcl/removeAcl/setAcl comes in for a bucket, which is not handled 
here yet, we should fall back to the normal handler.handle path. So here we need 
code along these lines:
   
   if (omClientRequest != null) {
 OMClientResponse omClientResponse =
 omClientRequest.validateAndUpdateCache(getOzoneManager(),
 transactionLogIndex, ozoneManagerDoubleBuffer::add);
   } else {
 return handle(omClientRequest);
   }
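   
   To make the intent concrete, a minimal hypothetical sketch of that dispatch 
pattern follows (the helper createClientRequest() and the surrounding class are 
illustrative assumptions, not the actual OzoneManager code): request types already 
converted to the new OMClientRequest model go through validateAndUpdateCache, 
everything else falls back to the old handler.
   
   // Hedged sketch only: createClientRequest() is a placeholder, not a real method.
   OMResponse handleApplyTransaction(OMRequest omRequest, long transactionLogIndex) {
     // Assume this returns null for request types (e.g. bucket AddAcl) that are
     // not yet converted to the new OMClientRequest model.
     OMClientRequest omClientRequest = createClientRequest(omRequest);
     if (omClientRequest != null) {
       OMClientResponse omClientResponse =
           omClientRequest.validateAndUpdateCache(getOzoneManager(),
               transactionLogIndex, ozoneManagerDoubleBuffer::add);
       return omClientResponse.getOMResponse();
     } else {
       // Not yet converted: fall back to the pre-HA handler path.
       return handler.handle(omRequest);
     }
   }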
   
   





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1147: HDDS-1619. Support volume acl operations for OM HA. Contributed by…

2019-07-31 Thread GitBox
bharatviswa504 commented on a change in pull request #1147: HDDS-1619. Support 
volume acl operations for OM HA. Contributed by…
URL: https://github.com/apache/hadoop/pull/1147#discussion_r309527672
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerHARequestHandlerImpl.java
 ##
 @@ -70,6 +70,7 @@ public OMResponse handleApplyTransaction(OMRequest omRequest,
 case CreateS3Bucket:
 case DeleteS3Bucket:
 case InitiateMultiPartUpload:
+case AddAcl:
 
 Review comment:
   Yes, but when addAcl/removeAcl/setAcl comes in for a bucket, which is not handled 
here yet, we should fall back to the normal handler.handle path. So here we need 
code along these lines:
   
   if (omClientRequest != null) {
 OMClientResponse omClientResponse =
 omClientRequest.validateAndUpdateCache(getOzoneManager(),
 transactionLogIndex, ozoneManagerDoubleBuffer::add);
   } else {
 return handler.handle
   }
   
   





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1147: HDDS-1619. Support volume acl operations for OM HA. Contributed by…

2019-07-31 Thread GitBox
bharatviswa504 commented on a change in pull request #1147: HDDS-1619. Support 
volume acl operations for OM HA. Contributed by…
URL: https://github.com/apache/hadoop/pull/1147#discussion_r309526798
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/acl/OMVolumeAddAclRequest.java
 ##
 @@ -0,0 +1,103 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.om.request.volume.acl;
+
+import com.google.common.base.Preconditions;
+import com.google.common.collect.Lists;
+import org.apache.hadoop.hdds.scm.storage.CheckedBiFunction;
+import org.apache.hadoop.ozone.OzoneAcl;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.volume.OMVolumeAclOpResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMRequest;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMResponse;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.List;
+
+/**
+ * Handles volume add acl request.
+ */
+public class OMVolumeAddAclRequest extends OMVolumeAclRequest {
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMVolumeAddAclRequest.class);
+
+  private static CheckedBiFunction<List<OzoneAcl>,
+  OmVolumeArgs, IOException> volumeAddAclOp;
+
+  static {
+volumeAddAclOp = (acls, volArgs) -> volArgs.addAcl(acls.get(0));
+  }
+
+  private List<OzoneAcl> ozoneAcls;
+  private String volumeName;
+
+  public OMVolumeAddAclRequest(OMRequest omRequest) {
+super(omRequest, volumeAddAclOp);
+OzoneManagerProtocolProtos.AddAclRequest addAclRequest =
+getOmRequest().getAddAclRequest();
+Preconditions.checkNotNull(addAclRequest);
+ozoneAcls = Lists.newArrayList(
+OzoneAcl.fromProtobuf(addAclRequest.getAcl()));
+volumeName = addAclRequest.getObj().getPath().substring(1);
+  }
+
+  @Override
+  public List<OzoneAcl> getAcls() {
+return ozoneAcls;
+  }
+
+  @Override
+  public String getVolumeName() {
+return volumeName;
+  }
+
+  private OzoneAcl getAcl() {
+return ozoneAcls.get(0);
+  }
+
+
+  @Override
+  OMResponse.Builder onInit() {
+return OMResponse.newBuilder().setCmdType(
+OzoneManagerProtocolProtos.Type.AddAcl)
+.setStatus(OzoneManagerProtocolProtos.Status.OK).setSuccess(true);
+  }
+
+  @Override
+  OMClientResponse onSuccess(OMResponse.Builder omResponse,
+  OmVolumeArgs omVolumeArgs){
+LOG.debug("Add acl: {} to volume: {} success!",
+getAcl(), getVolumeName());
+omResponse.setAddAclResponse(OzoneManagerProtocolProtos.AddAclResponse
+.newBuilder().setResponse(true).build());
+return new OMVolumeAclOpResponse(omVolumeArgs, omResponse.build());
+  }
+
+  @Override
+  OMClientResponse onFailure(OMResponse.Builder omResponse,
+  IOException ex) {
+LOG.error("Add acl {} to volume {} failed!",
 
 Review comment:
   I liked this approach; the code looks cleaner.
   Can we move the logging outside of onSuccess and onFailure? These methods are 
called with the lock held.
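   
   For illustration, a minimal sketch of that suggestion with generic stand-in 
names (AclStore, doAdd and the lock are assumptions, not Ozone classes): perform 
the update inside the lock, remember the outcome, and emit the log line only 
after the lock has been released.
   
   import java.util.concurrent.locks.ReentrantLock;
   
   // Hedged sketch: illustrative stand-ins only, not the actual OzoneManager types.
   class AclStore {
     private final ReentrantLock volumeLock = new ReentrantLock();
   
     boolean addAcl(String volume, String acl) {
       boolean added;
       volumeLock.lock();
       try {
         // Equivalent of onSuccess/onFailure: mutate state, but do not log here.
         added = doAdd(volume, acl);
       } finally {
         volumeLock.unlock();
       }
       // Logging moved outside the critical section, as suggested above.
       System.out.println("Add acl " + acl + " to volume " + volume
           + " success=" + added);
       return added;
     }
   
     private boolean doAdd(String volume, String acl) {
       return true; // placeholder for the real ACL update
     }
   }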





[jira] [Assigned] (HADOOP-16479) ABFS FileStatus.getModificationTime returns localized time instead of UTC

2019-07-31 Thread Bilahari T H (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilahari T H reassigned HADOOP-16479:
-

Assignee: Bilahari T H

> ABFS FileStatus.getModificationTime returns localized time instead of UTC
> -
>
> Key: HADOOP-16479
> URL: https://issues.apache.org/jira/browse/HADOOP-16479
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Joan Sala Reixach
>Assignee: Bilahari T H
>Priority: Major
> Attachments: image-2019-07-31-18-21-53-023.png, 
> image-2019-07-31-18-23-37-349.png
>
>
> As per the javadoc, FileStatus.getModificationTime() should return the time in 
> UTC, but it returns the time in the JVM time zone.
> The issue originates in AzureBlobFileSystemStore.getFileStatus() itself, since 
> parseLastModifiedTime() returns a wrong date. I created a file in Azure Data 
> Lake Gen2; when I look at it through Azure Explorer it shows the correct 
> modification time, but the method returns a time 2 hours earlier (I am in 
> CEST = UTC+2).
> Azure Explorer last modified time:
> !image-2019-07-31-18-21-53-023.png|width=460,height=45!
> AbfsClient parseLastModifiedTime:
> !image-2019-07-31-18-23-37-349.png|width=459,height=284!
> It shows 15:21 CEST as utcDate when it should be 15:21 UTC, which results in 
> the 2-hour offset.
> DateFormat.parse uses a localized calendar to parse dates, which might be the 
> source of the issue.
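
A minimal standalone sketch of the suspected class of bug (illustrative only, not 
the actual AbfsClient code): when a zone-less timestamp is parsed with a 
SimpleDateFormat left on the JVM default calendar, the wall-clock value is 
interpreted in the local zone; pinning the formatter to UTC removes the offset.

    import java.text.ParseException;
    import java.text.SimpleDateFormat;
    import java.util.Date;
    import java.util.Locale;
    import java.util.TimeZone;

    public class LastModifiedParseDemo {
      public static void main(String[] args) throws ParseException {
        // Hypothetical zone-less timestamp that is meant to be UTC.
        String header = "Wed, 31 Jul 2019 15:21:00";
        String pattern = "EEE, dd MMM yyyy HH:mm:ss";

        SimpleDateFormat defaultZone = new SimpleDateFormat(pattern, Locale.US);
        Date wrong = defaultZone.parse(header);   // interpreted in the JVM zone (e.g. CEST)

        SimpleDateFormat utcZone = new SimpleDateFormat(pattern, Locale.US);
        utcZone.setTimeZone(TimeZone.getTimeZone("UTC"));
        Date right = utcZone.parse(header);       // interpreted as UTC

        // On a UTC+2 JVM the two epoch values differ by 2 hours (7,200,000 ms).
        System.out.println("default-zone millis: " + wrong.getTime());
        System.out.println("UTC millis:          " + right.getTime());
      }
    }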



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)




[GitHub] [hadoop] smengcl commented on issue #1187: HDDS-1829 On OM reload/restart OmMetrics#numKeys should be updated

2019-07-31 Thread GitBox
smengcl commented on issue #1187: HDDS-1829 On OM reload/restart 
OmMetrics#numKeys should be updated
URL: https://github.com/apache/hadoop/pull/1187#issuecomment-517117340
 
 
   /label ozone





[GitHub] [hadoop] hadoop-yetus commented on issue #1194: Hdds 1879. Support multiple excluded scopes when choosing datanodes in NetworkTopology

2019-07-31 Thread GitBox
hadoop-yetus commented on issue #1194: Hdds 1879.  Support multiple excluded 
scopes when choosing datanodes in NetworkTopology
URL: https://github.com/apache/hadoop/pull/1194#issuecomment-517116073
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 35 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 141 | Maven dependency ordering for branch |
   | +1 | mvninstall | 727 | trunk passed |
   | +1 | compile | 359 | trunk passed |
   | +1 | checkstyle | 74 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1055 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 164 | trunk passed |
   | 0 | spotbugs | 459 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 685 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 19 | Maven dependency ordering for patch |
   | +1 | mvninstall | 566 | the patch passed |
   | +1 | compile | 390 | the patch passed |
   | +1 | javac | 390 | the patch passed |
   | +1 | checkstyle | 74 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 737 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 89 | hadoop-hdds generated 1 new + 16 unchanged - 0 fixed = 
17 total (was 16) |
   | -1 | findbugs | 266 | hadoop-hdds generated 1 new + 0 unchanged - 0 fixed 
= 1 total (was 0) |
   ||| _ Other Tests _ |
   | -1 | unit | 354 | hadoop-hdds in the patch failed. |
   | -1 | unit | 2055 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 45 | The patch does not generate ASF License warnings. |
   | | | 8613 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-hdds |
   |  |  org.apache.hadoop.hdds.scm.net.InnerNodeImpl.getLeaf(int, List, 
Collection, int) makes inefficient use of keySet iterator instead of entrySet 
iterator  At InnerNodeImpl.java:inefficient use of keySet iterator instead of 
entrySet iterator  At InnerNodeImpl.java:[line 340] |
   | Failed junit tests | 
hadoop.hdds.scm.container.placement.algorithms.TestSCMContainerPlacementRackAware
 |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1194/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1194 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux ab6a97182374 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / c1f7440 |
   | Default Java | 1.8.0_212 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1194/2/artifact/out/diff-javadoc-javadoc-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1194/2/artifact/out/new-findbugs-hadoop-hdds.html
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1194/2/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1194/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1194/2/testReport/ |
   | Max. process+thread count | 4376 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/server-scm U: hadoop-hdds |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1194/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] anuengineer edited a comment on issue #1201: HDDS-1788. Add kerberos support to Ozone Recon

2019-07-31 Thread GitBox
anuengineer edited a comment on issue #1201: HDDS-1788. Add kerberos support to 
Ozone Recon
URL: https://github.com/apache/hadoop/pull/1201#issuecomment-517107368
 
 
   I am not sure I understand the patch well enough. @xiaoyuyao @arp7 @elek, can 
you please take a look?





[GitHub] [hadoop] anuengineer commented on issue #1201: HDDS-1788. Add kerberos support to Ozone Recon

2019-07-31 Thread GitBox
anuengineer commented on issue #1201: HDDS-1788. Add kerberos support to Ozone 
Recon
URL: https://github.com/apache/hadoop/pull/1201#issuecomment-517107368
 
 
   I am not sure I understand the patch well enough. @xiaoyuyao @arp7, can you 
please take a look?





[GitHub] [hadoop] hadoop-yetus commented on issue #1147: HDDS-1619. Support volume acl operations for OM HA. Contributed by…

2019-07-31 Thread GitBox
hadoop-yetus commented on issue #1147: HDDS-1619. Support volume acl operations 
for OM HA. Contributed by…
URL: https://github.com/apache/hadoop/pull/1147#issuecomment-517103330
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 164 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 717 | trunk passed |
   | +1 | compile | 410 | trunk passed |
   | +1 | checkstyle | 92 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 975 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 158 | trunk passed |
   | 0 | spotbugs | 441 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 636 | trunk passed |
   | -0 | patch | 481 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 560 | the patch passed |
   | +1 | compile | 417 | the patch passed |
   | +1 | javac | 417 | the patch passed |
   | -0 | checkstyle | 39 | hadoop-ozone: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 717 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 162 | the patch passed |
   | +1 | findbugs | 666 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 350 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1883 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 41 | The patch does not generate ASF License warnings. |
   | | | 8180 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1147 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux e1345a14a265 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / b008072 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/8/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/8/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/8/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/8/testReport/ |
   | Max. process+thread count | 4027 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/8/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #1202: HDDS-1884. Support Bucket addACL operations for OM HA.

2019-07-31 Thread GitBox
hadoop-yetus commented on issue #1202: HDDS-1884. Support Bucket addACL 
operations for OM HA.
URL: https://github.com/apache/hadoop/pull/1202#issuecomment-517102013
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 73 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 31 | Maven dependency ordering for branch |
   | +1 | mvninstall | 629 | trunk passed |
   | +1 | compile | 386 | trunk passed |
   | +1 | checkstyle | 81 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 945 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 171 | trunk passed |
   | 0 | spotbugs | 464 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 674 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 21 | Maven dependency ordering for patch |
   | +1 | mvninstall | 576 | the patch passed |
   | +1 | compile | 372 | the patch passed |
   | +1 | cc | 372 | the patch passed |
   | +1 | javac | 372 | the patch passed |
   | +1 | checkstyle | 76 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 755 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 179 | the patch passed |
   | -1 | findbugs | 491 | hadoop-ozone generated 1 new + 0 unchanged - 0 fixed 
= 1 total (was 0) |
   ||| _ Other Tests _ |
   | -1 | unit | 358 | hadoop-hdds in the patch failed. |
   | -1 | unit | 2010 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 62 | The patch does not generate ASF License warnings. |
   | | | 8296 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-ozone |
   |  |  Useless object stored in variable newAcls of method 
org.apache.hadoop.ozone.om.helpers.OmBucketInfo.addAcl(OzoneAcl)  At 
OmBucketInfo.java:newAcls of method 
org.apache.hadoop.ozone.om.helpers.OmBucketInfo.addAcl(OzoneAcl)  At 
OmBucketInfo.java:[line 161] |
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.0 Server=19.03.0 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1202 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 4cd8f33ab1ac 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / b008072 |
   | Default Java | 1.8.0_212 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/1/artifact/out/new-findbugs-hadoop-ozone.html
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/1/testReport/ |
   | Max. process+thread count | 4256 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[GitHub] [hadoop] hadoop-yetus commented on issue #1202: HDDS-1884. Support Bucket addACL operations for OM HA.

2019-07-31 Thread GitBox
hadoop-yetus commented on issue #1202: HDDS-1884. Support Bucket addACL 
operations for OM HA.
URL: https://github.com/apache/hadoop/pull/1202#issuecomment-517101767
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 119 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 16 | Maven dependency ordering for branch |
   | +1 | mvninstall | 614 | trunk passed |
   | +1 | compile | 375 | trunk passed |
   | +1 | checkstyle | 78 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 945 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 164 | trunk passed |
   | 0 | spotbugs | 431 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 625 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 21 | Maven dependency ordering for patch |
   | +1 | mvninstall | 573 | the patch passed |
   | +1 | compile | 381 | the patch passed |
   | +1 | cc | 381 | the patch passed |
   | +1 | javac | 381 | the patch passed |
   | +1 | checkstyle | 77 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 726 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 166 | the patch passed |
   | -1 | findbugs | 438 | hadoop-ozone generated 1 new + 0 unchanged - 0 fixed 
= 1 total (was 0) |
   ||| _ Other Tests _ |
   | -1 | unit | 342 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1878 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 41 | The patch does not generate ASF License warnings. |
   | | | 7964 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-ozone |
   |  |  Useless object stored in variable newAcls of method 
org.apache.hadoop.ozone.om.helpers.OmBucketInfo.addAcl(OzoneAcl)  At 
OmBucketInfo.java:newAcls of method 
org.apache.hadoop.ozone.om.helpers.OmBucketInfo.addAcl(OzoneAcl)  At 
OmBucketInfo.java:[line 161] |
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1202 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 8cb2937ff193 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / b008072 |
   | Default Java | 1.8.0_222 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/2/artifact/out/new-findbugs-hadoop-ozone.html
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/2/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/2/testReport/ |
   | Max. process+thread count | 4584 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[GitHub] [hadoop] hadoop-yetus commented on issue #1202: HDDS-1884. Support Bucket addACL operations for OM HA.

2019-07-31 Thread GitBox
hadoop-yetus commented on issue #1202: HDDS-1884. Support Bucket addACL 
operations for OM HA.
URL: https://github.com/apache/hadoop/pull/1202#issuecomment-517098756
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 79 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 30 | Maven dependency ordering for branch |
   | +1 | mvninstall | 662 | trunk passed |
   | +1 | compile | 385 | trunk passed |
   | +1 | checkstyle | 69 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 846 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 161 | trunk passed |
   | 0 | spotbugs | 453 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 657 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for patch |
   | +1 | mvninstall | 602 | the patch passed |
   | +1 | compile | 399 | the patch passed |
   | +1 | cc | 399 | the patch passed |
   | +1 | javac | 399 | the patch passed |
   | +1 | checkstyle | 72 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 660 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 159 | the patch passed |
   | -1 | findbugs | 470 | hadoop-ozone generated 1 new + 0 unchanged - 0 fixed 
= 1 total (was 0) |
   ||| _ Other Tests _ |
   | -1 | unit | 332 | hadoop-hdds in the patch failed. |
   | -1 | unit | 258 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 35 | The patch does not generate ASF License warnings. |
   | | | 6299 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-ozone |
   |  |  Useless object stored in variable newAcls of method 
org.apache.hadoop.ozone.om.helpers.OmBucketInfo.addAcl(OzoneAcl)  At 
OmBucketInfo.java:newAcls of method 
org.apache.hadoop.ozone.om.helpers.OmBucketInfo.addAcl(OzoneAcl)  At 
OmBucketInfo.java:[line 161] |
   | Failed junit tests | 
hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1202 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 4394f0f05310 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / b008072 |
   | Default Java | 1.8.0_212 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/3/artifact/out/new-findbugs-hadoop-ozone.html
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/3/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/3/testReport/ |
   | Max. process+thread count | 533 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1202/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] sunchao commented on issue #1198: HDFS-14034: Support getQuotaUsage API in WebHDFS

2019-07-31 Thread GitBox
sunchao commented on issue #1198: HDFS-14034: Support getQuotaUsage API in 
WebHDFS
URL: https://github.com/apache/hadoop/pull/1198#issuecomment-517096919
 
 
   @goiri do you happen to know why the CI (for branch-2) is not triggering for 
this? Thanks.





[GitHub] [hadoop] hadoop-yetus commented on issue #1201: HDDS-1788. Add kerberos support to Ozone Recon

2019-07-31 Thread GitBox
hadoop-yetus commented on issue #1201: HDDS-1788. Add kerberos support to Ozone 
Recon
URL: https://github.com/apache/hadoop/pull/1201#issuecomment-517094193
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 98 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | 0 | yamllint | 0 | yamllint was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 73 | Maven dependency ordering for branch |
   | +1 | mvninstall | 659 | trunk passed |
   | +1 | compile | 431 | trunk passed |
   | +1 | checkstyle | 81 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 968 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 208 | trunk passed |
   | 0 | spotbugs | 454 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 691 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 53 | Maven dependency ordering for patch |
   | +1 | mvninstall | 600 | the patch passed |
   | +1 | compile | 409 | the patch passed |
   | +1 | javac | 409 | the patch passed |
   | +1 | checkstyle | 94 | the patch passed |
   | +1 | hadolint | 3 | There were no new hadolint issues. |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 745 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 186 | the patch passed |
   | +1 | findbugs | 702 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 357 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1968 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 49 | The patch does not generate ASF License warnings. |
   | | | 8657 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.Test2WayCommitInRatis |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.TestContainerStateMachineIdempotency |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1201/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1201 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml hadolint shellcheck shelldocs yamllint findbugs 
checkstyle |
   | uname | Linux ccaf957e3cb3 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / b008072 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1201/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1201/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1201/1/testReport/ |
   | Max. process+thread count | 4257 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/dist hadoop-ozone/ozone-recon 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1201/1/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 findbugs=3.1.0-RC1 
hadolint=1.11.1-0-g0e692dd |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] bharatviswa504 merged pull request #1199: HDDS-1885. Fix bug in checkAcls in OzoneManager.

2019-07-31 Thread GitBox
bharatviswa504 merged pull request #1199: HDDS-1885. Fix bug in checkAcls in 
OzoneManager.
URL: https://github.com/apache/hadoop/pull/1199
 
 
   





[GitHub] [hadoop] bharatviswa504 commented on issue #1199: HDDS-1885. Fix bug in checkAcls in OzoneManager.

2019-07-31 Thread GitBox
bharatviswa504 commented on issue #1199: HDDS-1885. Fix bug in checkAcls in 
OzoneManager.
URL: https://github.com/apache/hadoop/pull/1199#issuecomment-517087824
 
 
   Test failures and checkstyle issues are not related to this patch.
   Thank you @arp7 for the review. I will commit this to trunk.





[GitHub] [hadoop] hadoop-yetus commented on issue #1147: HDDS-1619. Support volume acl operations for OM HA. Contributed by…

2019-07-31 Thread GitBox
hadoop-yetus commented on issue #1147: HDDS-1619. Support volume acl operations 
for OM HA. Contributed by…
URL: https://github.com/apache/hadoop/pull/1147#issuecomment-517078998
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 138 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 769 | trunk passed |
   | +1 | compile | 436 | trunk passed |
   | +1 | checkstyle | 92 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1109 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 217 | trunk passed |
   | 0 | spotbugs | 528 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 776 | trunk passed |
   | -0 | patch | 582 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 732 | the patch passed |
   | +1 | compile | 433 | the patch passed |
   | +1 | javac | 433 | the patch passed |
   | -0 | checkstyle | 53 | hadoop-ozone: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 854 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 201 | the patch passed |
   | +1 | findbugs | 734 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 381 | hadoop-hdds in the patch failed. |
   | -1 | unit | 2544 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 43 | The patch does not generate ASF License warnings. |
   | | | 9750 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestHybridPipelineOnDatanode |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | hadoop.ozone.client.rpc.TestReadRetries |
   |   | hadoop.ozone.TestContainerStateMachineIdempotency |
   |   | hadoop.ozone.TestContainerOperations |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.TestMiniOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1147 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux dc459edc1d0b 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / b008072 |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/7/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/7/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/7/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/7/testReport/ |
   | Max. process+thread count | 3941 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/7/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[GitHub] [hadoop] hadoop-yetus commented on issue #1200: HDDS-1832 : Improve logging for PipelineActions handling in SCM and datanode.

2019-07-31 Thread GitBox
hadoop-yetus commented on issue #1200: HDDS-1832 : Improve logging for 
PipelineActions handling in SCM and datanode.
URL: https://github.com/apache/hadoop/pull/1200#issuecomment-517077144
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 107 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 13 | Maven dependency ordering for branch |
   | +1 | mvninstall | 633 | trunk passed |
   | +1 | compile | 399 | trunk passed |
   | +1 | checkstyle | 77 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1031 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 166 | trunk passed |
   | 0 | spotbugs | 433 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 631 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 21 | Maven dependency ordering for patch |
   | +1 | mvninstall | 559 | the patch passed |
   | +1 | compile | 374 | the patch passed |
   | +1 | javac | 374 | the patch passed |
   | +1 | checkstyle | 78 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 736 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 186 | the patch passed |
   | +1 | findbugs | 719 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 362 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1965 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 41 | The patch does not generate ASF License warnings. |
   | | | 8262 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.hdds.scm.pipeline.TestPipelineClose |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1200/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1200 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 52369254f8ef 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / b008072 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1200/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1200/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1200/1/testReport/ |
   | Max. process+thread count | 3922 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service hadoop-hdds/server-scm U: 
hadoop-hdds |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1200/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 opened a new pull request #1202: HDDS-1884. Support Bucket addACL operations for OM HA.

2019-07-31 Thread GitBox
bharatviswa504 opened a new pull request #1202: HDDS-1884. Support Bucket 
addACL operations for OM HA.
URL: https://github.com/apache/hadoop/pull/1202
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1147: HDDS-1619. Support volume acl operations for OM HA. Contributed by…

2019-07-31 Thread GitBox
hadoop-yetus commented on issue #1147: HDDS-1619. Support volume acl operations 
for OM HA. Contributed by…
URL: https://github.com/apache/hadoop/pull/1147#issuecomment-517072340
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 151 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 829 | trunk passed |
   | +1 | compile | 497 | trunk passed |
   | +1 | checkstyle | 92 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1038 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 167 | trunk passed |
   | 0 | spotbugs | 433 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 631 | trunk passed |
   | -0 | patch | 475 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 555 | the patch passed |
   | +1 | compile | 375 | the patch passed |
   | +1 | javac | 375 | the patch passed |
   | -0 | checkstyle | 40 | hadoop-ozone: The patch generated 4 new + 0 
unchanged - 0 fixed = 4 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 728 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 167 | the patch passed |
   | +1 | findbugs | 650 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 330 | hadoop-hdds in the patch failed. |
   | -1 | unit | 2710 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 46 | The patch does not generate ASF License warnings. |
   | | | 9205 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.om.TestSecureOzoneManager |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1147 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 191ee7e7469d 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / b008072 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/5/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/5/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/5/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/5/testReport/ |
   | Max. process+thread count | 5401 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/5/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] vivekratnavel commented on issue #1200: HDDS-1832 : Improve logging for PipelineActions handling in SCM and datanode.

2019-07-31 Thread GitBox
vivekratnavel commented on issue #1200: HDDS-1832 : Improve logging for 
PipelineActions handling in SCM and datanode.
URL: https://github.com/apache/hadoop/pull/1200#issuecomment-517069428
 
 
   /label ozone
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] vivekratnavel commented on issue #1200: HDDS-1832 : Improve logging for PipelineActions handling in SCM and datanode.

2019-07-31 Thread GitBox
vivekratnavel commented on issue #1200: HDDS-1832 : Improve logging for 
PipelineActions handling in SCM and datanode.
URL: https://github.com/apache/hadoop/pull/1200#issuecomment-517069334
 
 
   +1 LGTM (non-binding).


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1147: HDDS-1619. Support volume acl operations for OM HA. Contributed by…

2019-07-31 Thread GitBox
hadoop-yetus commented on issue #1147: HDDS-1619. Support volume acl operations 
for OM HA. Contributed by…
URL: https://github.com/apache/hadoop/pull/1147#issuecomment-517067908
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 44 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 630 | trunk passed |
   | +1 | compile | 377 | trunk passed |
   | +1 | checkstyle | 70 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 856 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 157 | trunk passed |
   | 0 | spotbugs | 460 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 672 | trunk passed |
   | -0 | patch | 491 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 585 | the patch passed |
   | +1 | compile | 378 | the patch passed |
   | +1 | javac | 378 | the patch passed |
   | -0 | checkstyle | 37 | hadoop-ozone: The patch generated 4 new + 0 
unchanged - 0 fixed = 4 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 642 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 153 | the patch passed |
   | +1 | findbugs | 631 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 289 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1582 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 39 | The patch does not generate ASF License warnings. |
   | | | 7338 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1147 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 427416941c56 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / b008072 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/6/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/6/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/6/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/6/testReport/ |
   | Max. process+thread count | 5180 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/6/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] vivekratnavel commented on issue #1201: HDDS-1788. Add kerberos support to Ozone Recon

2019-07-31 Thread GitBox
vivekratnavel commented on issue #1201: HDDS-1788. Add kerberos support to 
Ozone Recon
URL: https://github.com/apache/hadoop/pull/1201#issuecomment-517067735
 
 
   @swagle @avijayanhwx @anuengineer Please review when you find time. Thanks!


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] vivekratnavel commented on issue #1201: HDDS-1788. Add kerberos support to Ozone Recon

2019-07-31 Thread GitBox
vivekratnavel commented on issue #1201: HDDS-1788. Add kerberos support to 
Ozone Recon
URL: https://github.com/apache/hadoop/pull/1201#issuecomment-517067457
 
 
   /label ozone
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] vivekratnavel opened a new pull request #1201: HDDS-1788. Add kerberos support to Ozone Recon

2019-07-31 Thread GitBox
vivekratnavel opened a new pull request #1201: HDDS-1788. Add kerberos support 
to Ozone Recon
URL: https://github.com/apache/hadoop/pull/1201
 
 
   Recon fails to come up in a secure cluster with the following error:
   ```
   Failed startup of context 
o.e.j.w.WebAppContext@2009f9b0{/,file:///tmp/jetty-0.0.0.0-9888-recon-_-any-2565178148822292652.dir/webapp/,UNAVAILABLE}{/recon}
 javax.servlet.ServletException: javax.servlet.ServletException: Principal not 
defined in configuration at 
org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.init(KerberosAuthenticationHandler.java:188)
 at
   ```
   
   This patch addresses the issue and enables Recon to come up in clusters 
secured by Kerberos. I manually tested it by building the Recon jar, replacing 
the old jar in a live secure CM-deployed cluster, and verifying that Recon starts 
and logs in successfully with its Kerberos ticket. I also updated the ozonesecure 
docker-compose file to add Recon and verified that it comes up there as well. The 
patch additionally fixes various typos found in other parts of the source code 
that are unrelated to the title of this JIRA.
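
   As a rough sketch of the login flow this enables (the configuration key names 
below are illustrative placeholders, not necessarily the exact keys introduced by 
this patch):
   ```java
   // Illustrative sketch only: log in the Recon service principal from its keytab
   // before starting the HTTP server. The config key names are assumptions.
   import java.io.IOException;
   import java.net.InetAddress;

   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.security.SecurityUtil;
   import org.apache.hadoop.security.UserGroupInformation;

   public class ReconKerberosLoginSketch {
     public static void main(String[] args) throws IOException {
       Configuration conf = new Configuration();
       UserGroupInformation.setConfiguration(conf);
       if (UserGroupInformation.isSecurityEnabled()) {
         // Resolves any _HOST token in the principal and logs in from the keytab.
         SecurityUtil.login(conf,
             "ozone.recon.kerberos.keytab.file",   // placeholder key name
             "ozone.recon.kerberos.principal",     // placeholder key name
             InetAddress.getLocalHost().getCanonicalHostName());
       }
     }
   }
   ```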


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] vivekratnavel commented on a change in pull request #1146: HDDS-1366. Add ability in Recon to track the number of small files in an Ozone Cluster

2019-07-31 Thread GitBox
vivekratnavel commented on a change in pull request #1146: HDDS-1366. Add 
ability in Recon to track the number of small files in an Ozone Cluster
URL: https://github.com/apache/hadoop/pull/1146#discussion_r309410908
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/FileSizeCountTask.java
 ##
 @@ -0,0 +1,231 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.recon.tasks;
+
+import com.google.inject.Inject;
+import org.apache.commons.lang3.tuple.ImmutablePair;
+import org.apache.commons.lang3.tuple.Pair;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.utils.db.Table;
+import org.apache.hadoop.utils.db.TableIterator;
+import org.hadoop.ozone.recon.schema.tables.daos.FileCountBySizeDao;
+import org.hadoop.ozone.recon.schema.tables.pojos.FileCountBySize;
+import org.jooq.Configuration;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Iterator;
+import java.util.List;
+
+/**
+ * Class to iterate over the OM DB and store the counts of existing/new
+ * files binned into ranges (1KB, 10Kb..,10MB,..1PB) to the Recon
 
 Review comment:
   Can we change these ranges to (2KB, 4KB, 8KB, 16KB, ...1PB) ?
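
   If the bins did move to powers of two, the bin index could be computed directly 
from the key size with bit arithmetic; a rough sketch of the idea (illustrative 
names, not the actual FileSizeCountTask code):
   ```java
   // Illustrative sketch of power-of-two binning: bin 0 = [0, 2KB], bin 1 = (2KB, 4KB], ...
   public final class FileSizeBinning {
     private static final int NUM_BINS = 41;  // bin 40 reaches 2^51 bytes (~2 PB); adjust as needed

     /** Index of the power-of-two bin that a file of {@code size} bytes falls into. */
     public static int binIndex(long size) {
       if (size <= 2048L) {
         return 0;
       }
       // Highest set bit of (size - 1) maps each (2^n, 2^(n+1)] range to n.
       int highestBit = 63 - Long.numberOfLeadingZeros(size - 1);
       // Bin k covers (2^(k+10), 2^(k+11)] bytes.
       return Math.min(highestBit - 10, NUM_BINS - 1);
     }
   }
   ```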


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15681) AuthenticationFilter should generate valid date format for Set-Cookie header regardless of default Locale

2019-07-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16897615#comment-16897615
 ] 

Hadoop QA commented on HADOOP-15681:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 46s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 38s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
2s{color} | {color:green} hadoop-auth in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 94m 35s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HADOOP-15681 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12936239/HADOOP-15681.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 64ff11398e41 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / b008072 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16440/testReport/ |
| Max. process+thread count | 413 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-auth U: 
hadoop-common-project/hadoop-auth |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16440/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> AuthenticationFilter should generate valid 

[jira] [Commented] (HADOOP-15942) Change the logging level form DEBUG to ERROR for RuntimeErrorException in JMXJsonServlet

2019-07-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16897612#comment-16897612
 ] 

Hadoop QA commented on HADOOP-15942:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 25 unchanged - 1 fixed = 25 total (was 26) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 40s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
47s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
50s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 98m 53s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HADOOP-15942 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12961888/HADOOP-15942.trunk.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 477423ab4ad2 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / b008072 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16439/testReport/ |
| Max. process+thread count | 1349 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16439/console |
| Powered by | Apache Yetus 0.8.0   

[GitHub] [hadoop] avijayanhwx opened a new pull request #1200: HDDS-1832 : Improve logging for PipelineActions handling in SCM and datanode.

2019-07-31 Thread GitBox
avijayanhwx opened a new pull request #1200: HDDS-1832 : Improve logging for 
PipelineActions handling in SCM and datanode.
URL: https://github.com/apache/hadoop/pull/1200
 
 
   Added logging. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16479) ABFS FileStatus.getModificationTime returns localized time instead of UTC

2019-07-31 Thread Da Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou updated HADOOP-16479:
-
Issue Type: Sub-task  (was: Bug)
Parent: HADOOP-15763

> ABFS FileStatus.getModificationTime returns localized time instead of UTC
> -
>
> Key: HADOOP-16479
> URL: https://issues.apache.org/jira/browse/HADOOP-16479
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Joan Sala Reixach
>Priority: Major
> Attachments: image-2019-07-31-18-21-53-023.png, 
> image-2019-07-31-18-23-37-349.png
>
>
> As per javadoc, the method FileStatus.getModificationTime() should return the 
> time in UTC, but it returns the time in the JVM timezone.
> The issue originates in AzureBlobFileSystemStore.getFileStatus() itself, since 
> parseLastModifiedTime() returns a wrong date. I created a file in Azure 
> Data Lake Gen2, and when I look at it through the Azure Explorer it shows the 
> correct modification time, but the method returns a time 2 hours earlier (I am 
> in CET = UTC+2).
> Azure Explorer last modified time:
> !image-2019-07-31-18-21-53-023.png|width=460,height=45!
> AbfsClient parseLastModifiedTime:
> !image-2019-07-31-18-23-37-349.png|width=459,height=284!
> It shows 15:21 CEST as utcDate, when it should be 15:21 UTC, which results in 
> the 2 hour loss.
> DateFormat.parse uses a localized calendar to parse dates which might be the 
> source of the issue.
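
A minimal sketch of how the parse could be pinned to a fixed locale and UTC so the 
result no longer depends on the JVM defaults (the general idea only, not the actual 
ABFS patch):
{code:java}
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Locale;
import java.util.TimeZone;

public class LastModifiedParseSketch {
  // HTTP Last-Modified headers look like "Wed, 31 Jul 2019 15:21:00 GMT".
  static long parseLastModifiedUtc(String lastModified) throws ParseException {
    SimpleDateFormat fmt =
        new SimpleDateFormat("EEE, dd MMM yyyy HH:mm:ss zzz", Locale.US); // English day/month names
    fmt.setTimeZone(TimeZone.getTimeZone("UTC")); // never fall back to the JVM default zone
    return fmt.parse(lastModified).getTime();     // epoch millis, timezone-neutral
  }
}
{code}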



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao edited a comment on issue #1147: HDDS-1619. Support volume addACL operations for OM HA. Contributed by…

2019-07-31 Thread GitBox
xiaoyuyao edited a comment on issue #1147: HDDS-1619. Support volume addACL 
operations for OM HA. Contributed by…
URL: https://github.com/apache/hadoop/pull/1147#issuecomment-517041272
 
 
   bq. To handle Non-HA also with new HA code, we have done some changes in 
HDDS-1856. So, this Jira needs a few more changes like adding to double-buffer 
and setFlushFuture in validateAndUpdateCache.
   
   Thanks for the heads up, @bharatviswa504. I've rebased the change and also 
added removeAcl and setAcl for volume.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao commented on issue #1147: HDDS-1619. Support volume addACL operations for OM HA. Contributed by…

2019-07-31 Thread GitBox
xiaoyuyao commented on issue #1147: HDDS-1619. Support volume addACL operations 
for OM HA. Contributed by…
URL: https://github.com/apache/hadoop/pull/1147#issuecomment-517041272
 
 
   bq. To handle Non-HA also with new HA code, we have done some changes in 
HDDS-1856. So, this Jira needs a few more changes like adding to double-buffer 
and setFlushFuture in validateAndUpdateCache.
   
   Thanks for the heads up. I've rebased the change and also added removeAcl and 
setAcl for volume.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-14819) Update commons-net to 3.6

2019-07-31 Thread kevin su (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kevin su reassigned HADOOP-14819:
-

Assignee: kevin su

> Update commons-net to 3.6
> -
>
> Key: HADOOP-14819
> URL: https://issues.apache.org/jira/browse/HADOOP-14819
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Lukas Waldmann
>Assignee: kevin su
>Priority: Major
>
> Please update commons-net to 3.6, as the currently used 3.1 is 6 years old and 
> has several issues with SSL connections.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12282) Connection thread's name should be updated after address changing is detected

2019-07-31 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-12282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16897556#comment-16897556
 ] 

Wei-Chiu Chuang commented on HADOOP-12282:
--

Makes sense to me.

> Connection thread's name should be updated after address changing is detected
> -
>
> Key: HADOOP-12282
> URL: https://issues.apache.org/jira/browse/HADOOP-12282
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Reporter: zhouyingchao
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HADOOP-12282-001.patch, HADOOP-12282.002.patch
>
>
> In a Hadoop HDFS cluster, I changed the standby NameNode's IP address (the 
> hostname is unchanged and the routing tables are updated). After the 
> change, the cluster runs as normal.
>  However, I found that the debug message of the datanode's IPC still prints the 
> original IP address. Looking into the implementation, it turns out that 
> the original address is used in the thread's name. I think the thread's name 
> should be updated when the address change is detected, because the server 
> address is one of the constituent elements of the thread's name.
> {code:java}
> Connection(ConnectionId remoteId, int serviceClass,
> Consumer removeMethod) {
> ..
> UserGroupInformation ticket = remoteId.getTicket();
> // try SASL if security is enabled or if the ugi contains tokens.
> // this causes a SIMPLE client with tokens to attempt SASL
> boolean trySasl = UserGroupInformation.isSecurityEnabled() ||
>   (ticket != null && !ticket.getTokens().isEmpty());
> this.authProtocol = trySasl ? AuthProtocol.SASL : AuthProtocol.NONE;
> this.setName("IPC Client (" + socketFactory.hashCode() +") connection to " +
> server.toString() +
> " from " + ((ticket==null)?"an unknown user":ticket.getUserName()));
> this.setDaemon(true);
> }{code}
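
A rough illustration of the proposed behavior, i.e. rebuilding the thread name from 
the current server address once a change is detected (names are hypothetical, not 
the actual Client.java patch):
{code:java}
import java.net.InetSocketAddress;
import javax.net.SocketFactory;

// Hypothetical helper, not the actual HADOOP-12282 patch: rebuild the connection
// thread's name so it reflects the newly resolved server address.
final class ConnectionThreadNaming {
  static void refreshName(Thread connectionThread, SocketFactory socketFactory,
      InetSocketAddress currentServer, String userName) {
    connectionThread.setName("IPC Client (" + socketFactory.hashCode()
        + ") connection to " + currentServer + " from " + userName);
  }
}
{code}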



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15440) Support kerberos principal name pattern for KerberosAuthenticationHandler

2019-07-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16897549#comment-16897549
 ] 

Hadoop QA commented on HADOOP-15440:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HADOOP-15440 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-15440 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12921600/HADOOP-15440-trunk.001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16438/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Support kerberos principal name pattern for KerberosAuthenticationHandler
> -
>
> Key: HADOOP-15440
> URL: https://issues.apache.org/jira/browse/HADOOP-15440
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HADOOP-15440-trunk.001.patch
>
>
> When setting up an HttpFS or KMS server in secure mode, we have to configure a 
> Kerberos principal for the service, but these services do not support converting 
> a Kerberos principal name pattern into valid Kerberos principal names, whereas 
> NameNode/DataNode and many other services can do that, which is confusing 
> for users. So I propose to replace the hostname pattern with the hostname, which 
> should be the fully-qualified domain name.
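
For reference, the pattern substitution requested here is what SecurityUtil already 
does for services that support it; a small sketch of the _HOST resolution (the 
principal value is just an example):
{code:java}
import java.io.IOException;
import java.net.InetAddress;
import org.apache.hadoop.security.SecurityUtil;

public class PrincipalPatternSketch {
  public static void main(String[] args) throws IOException {
    String pattern = "HTTP/_HOST@EXAMPLE.COM"; // example configured value with a hostname pattern
    String fqdn = InetAddress.getLocalHost().getCanonicalHostName();
    // Replaces the _HOST token with the fully-qualified domain name of this host.
    System.out.println(SecurityUtil.getServerPrincipal(pattern, fqdn));
  }
}
{code}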



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15681) AuthenticationFilter should generate valid date format for Set-Cookie header regardless of default Locale

2019-07-31 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16897544#comment-16897544
 ] 

Wei-Chiu Chuang commented on HADOOP-15681:
--

Looks like a good fix. [~ste...@apache.org], if you have no objection I'll +1 
and commit this one.

> AuthenticationFilter should generate valid date format for Set-Cookie header 
> regardless of default Locale
> -
>
> Key: HADOOP-15681
> URL: https://issues.apache.org/jira/browse/HADOOP-15681
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.2.0
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Minor
> Attachments: HADOOP-15681.patch
>
>
> Hi guys,
> When I tried to set up Hadoop Kerberos authentication for Solr (HTTP2), I hit 
> this exception:
> {code}
> java.lang.IllegalArgumentException: null
>   at org.eclipse.jetty.http2.hpack.Huffman.octetsNeeded(Huffman.java:435) 
> ~[http2-hpack-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.http2.hpack.Huffman.octetsNeeded(Huffman.java:409) 
> ~[http2-hpack-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.hpack.HpackEncoder.encodeValue(HpackEncoder.java:368) 
> ~[http2-hpack-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.hpack.HpackEncoder.encode(HpackEncoder.java:302) 
> ~[http2-hpack-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.hpack.HpackEncoder.encode(HpackEncoder.java:179) 
> ~[http2-hpack-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.generator.HeadersGenerator.generateHeaders(HeadersGenerator.java:72)
>  ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.generator.HeadersGenerator.generate(HeadersGenerator.java:56)
>  ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.generator.Generator.control(Generator.java:80) 
> ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.HTTP2Session$ControlEntry.generate(HTTP2Session.java:1163)
>  ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.http2.HTTP2Flusher.process(HTTP2Flusher.java:184) 
> ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.util.IteratingCallback.processing(IteratingCallback.java:241)
>  ~[jetty-util-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.util.IteratingCallback.iterate(IteratingCallback.java:224) 
> ~[jetty-util-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.http2.HTTP2Session.frame(HTTP2Session.java:685) 
> ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.http2.HTTP2Session.frames(HTTP2Session.java:657) 
> ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.http2.HTTP2Stream.headers(HTTP2Stream.java:107) 
> ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.server.HttpTransportOverHTTP2.sendHeadersFrame(HttpTransportOverHTTP2.java:235)
>  ~[http2-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.server.HttpTransportOverHTTP2.send(HttpTransportOverHTTP2.java:134)
>  ~[http2-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.server.HttpChannel.sendResponse(HttpChannel.java:790) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.server.HttpChannel.write(HttpChannel.java:846) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:240) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:216) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.server.HttpOutput.close(HttpOutput.java:298) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.server.HttpWriter.close(HttpWriter.java:49) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.server.ResponseWriter.close(ResponseWriter.java:163) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.server.Response.closeOutput(Response.java:1038) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.server.handler.ErrorHandler.generateAcceptableResponse(ErrorHandler.java:178)
>  ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.server.handler.ErrorHandler.doError(ErrorHandler.java:142) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>  
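
A minimal sketch of the idea behind the fix, assuming it pins the Set-Cookie expiry 
formatter to an English locale and GMT so the HTTP/2 header encoder never sees 
non-ASCII month names (not the exact patch):
{code:java}
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

final class CookieExpiresFormatSketch {
  // Cookie "Expires" dates must use English names regardless of the JVM default locale.
  static String format(long expiresMillis) {
    SimpleDateFormat df = new SimpleDateFormat("EEE, dd-MMM-yyyy HH:mm:ss zzz", Locale.US);
    df.setTimeZone(TimeZone.getTimeZone("GMT"));
    return df.format(new Date(expiresMillis));
  }
}
{code}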

[jira] [Assigned] (HADOOP-15681) AuthenticationFilter should generate valid date format for Set-Cookie header regardless of default Locale

2019-07-31 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HADOOP-15681:


Assignee: Cao Manh Dat

> AuthenticationFilter should generate valid date format for Set-Cookie header 
> regardless of default Locale
> -
>
> Key: HADOOP-15681
> URL: https://issues.apache.org/jira/browse/HADOOP-15681
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.2.0
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Minor
> Attachments: HADOOP-15681.patch
>
>
> Hi guys,
> When I tried to set up Hadoop Kerberos authentication for Solr (HTTP2), I hit 
> this exception:
> {code}
> java.lang.IllegalArgumentException: null
>   at org.eclipse.jetty.http2.hpack.Huffman.octetsNeeded(Huffman.java:435) 
> ~[http2-hpack-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.http2.hpack.Huffman.octetsNeeded(Huffman.java:409) 
> ~[http2-hpack-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.hpack.HpackEncoder.encodeValue(HpackEncoder.java:368) 
> ~[http2-hpack-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.hpack.HpackEncoder.encode(HpackEncoder.java:302) 
> ~[http2-hpack-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.hpack.HpackEncoder.encode(HpackEncoder.java:179) 
> ~[http2-hpack-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.generator.HeadersGenerator.generateHeaders(HeadersGenerator.java:72)
>  ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.generator.HeadersGenerator.generate(HeadersGenerator.java:56)
>  ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.generator.Generator.control(Generator.java:80) 
> ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.HTTP2Session$ControlEntry.generate(HTTP2Session.java:1163)
>  ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.http2.HTTP2Flusher.process(HTTP2Flusher.java:184) 
> ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.util.IteratingCallback.processing(IteratingCallback.java:241)
>  ~[jetty-util-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.util.IteratingCallback.iterate(IteratingCallback.java:224) 
> ~[jetty-util-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.http2.HTTP2Session.frame(HTTP2Session.java:685) 
> ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.http2.HTTP2Session.frames(HTTP2Session.java:657) 
> ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.http2.HTTP2Stream.headers(HTTP2Stream.java:107) 
> ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.server.HttpTransportOverHTTP2.sendHeadersFrame(HttpTransportOverHTTP2.java:235)
>  ~[http2-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.server.HttpTransportOverHTTP2.send(HttpTransportOverHTTP2.java:134)
>  ~[http2-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.server.HttpChannel.sendResponse(HttpChannel.java:790) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.server.HttpChannel.write(HttpChannel.java:846) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:240) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:216) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.server.HttpOutput.close(HttpOutput.java:298) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.server.HttpWriter.close(HttpWriter.java:49) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.server.ResponseWriter.close(ResponseWriter.java:163) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.server.Response.closeOutput(Response.java:1038) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.server.handler.ErrorHandler.generateAcceptableResponse(ErrorHandler.java:178)
>  ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.server.handler.ErrorHandler.doError(ErrorHandler.java:142) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.server.handler.ErrorHandler.handle(ErrorHandler.java:78) 
> 

[GitHub] [hadoop] sodonnel commented on a change in pull request #1196: HDDS-1881. Design doc: decommissioning in Ozone

2019-07-31 Thread GitBox
sodonnel commented on a change in pull request #1196: HDDS-1881. Design doc: 
decommissioning in Ozone
URL: https://github.com/apache/hadoop/pull/1196#discussion_r309439259
 
 

 ##
 File path: hadoop-hdds/docs/content/design/decommissioning.md
 ##
 @@ -0,0 +1,720 @@
+---
+title: Decommissioning in Ozone
+summary: Formal process to shut down machines in a safe way after the required 
replications.
+date: 2019-07-31
+jira: HDDS-1881
+status: current
+author: Anu Engineer, Marton Elek, Stephen O'Donnell 
+---
+
+
+# Abstract 
+
+The goal of decommissioning is to turn off a selected set of machines without 
data loss. It may or may not require moving the existing replicas of the 
containers to other nodes.
+
+There are two main classes of decommissioning:
+
+ * __Maintenance mode__: where the node is expected to be back after a while. 
It may not require replication of containers if enough replicas are available 
from other nodes (as we expect to have the current replicas after the restart.)
+
+ * __Decommissioning__: where the node won't be started again. All the data 
should be replicated according to the current replication rules.
+
+Goals:
+
+ * Decommissioning can be canceled any time
+ * The progress of the decommissioning should be trackable
+ * The nodes under decommissioning / maintenance mode should not be used for 
new pipelines / containers
+ * The state of the datanodes should be persisted / replicated by the SCM (in 
HDFS the decommissioning info exclude/include lists are replicated manually by 
the admin). If a datanode is marked for decommissioning, this state should be 
available after SCM and/or Datanode restarts.
+ * We need to support validations before decommissioning (but the violations 
can be ignored by the admin).
+ * The administrator should be notified when a node can be turned off.
+ * The maintenance mode can be time constrained: if the node is marked for 
maintenance for 1 week and the node is not up after one week, the containers 
should be considered as lost (DEAD node) and should be replicated.
+
+# Introduction
+
+Ozone is a highly available file system that relies on commodity hardware. In 
other words, Ozone is designed to handle failures of these nodes all the time.
+
+The Storage Container Manager(SCM) is designed to monitor the node health and 
replicate blocks and containers as needed.
+
+At times, Operators of the cluster can help the SCM by giving it hints. When 
removing a datanode, the operator can provide a hint. That is, a planned 
failure of the node is coming up, and SCM can make sure it reaches a safe state 
to handle this planned failure.
+
+Sometimes, this failure is transient; that is, the operator is taking down 
this node temporarily. In that case, we can live with lower replica counts by 
being optimistic.
+
+Both of these operations, __Maintenance__ and __Decommissioning__, are similar 
from the Replication point of view. In both cases, the user instructs us on 
how to handle an upcoming failure.
+
+Today, SCM (the *Replication Manager* component inside SCM) understands only one 
form of failure handling. This paper extends the Replica Manager failure modes to 
allow users to request which failure handling model should be adopted (Optimistic 
or Pessimistic).
+
+Based on physical realities, there are two responses to any perceived failure, 
to heal the system by taking corrective actions or ignore the failure since the 
actions in the future will heal the system automatically.
+
+## User Experiences (Decommissioning vs Maintenance mode)
+
+From the user's point of view, there are two kinds of planned failures that 
the user would like to communicate to Ozone.
+
+The first kind is when a 'real' failure is going to happen in the future. This 
'real' failure is the act of decommissioning. We denote this as "decommission" 
throughout this paper. The response that the user wants is SCM/Ozone to make 
replicas to deal with the planned failure.
+
+The second kind is when the failure is 'transient.' The user knows that this 
failure is temporary and the cluster in most cases can safely ignore this issue. 
However, if the transient failures are going to cause a loss of 
availability, then the user would like Ozone to take appropriate actions to 
address it. An example of this case is if the user puts 3 data nodes into 
maintenance mode and switches them off.
+
+The transient failure can violate the availability guarantees of Ozone, since 
the user is telling us not to take corrective actions. Many times, the user 
does not understand the impact on availability while asking Ozone to ignore the 
failure.
+
+So this paper proposes the following definitions for Decommission and 
Maintenance of data nodes.
+
+__Decommission__ of a data node is deemed to be complete when SCM/Ozone 
completes the replication of all containers on the decommissioned data node to 
other data nodes. That is, the expected count matches the healthy count of 
containers in the cluster.
+
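
As a rough illustration of the completion criterion above (purely illustrative 
names, not Replication Manager code):
```java
import java.util.Map;

// Purely illustrative: decommission is complete once every container that had a
// replica on the node has reached its expected replica count on other healthy nodes.
final class DecommissionProgressSketch {
  static boolean isDecommissionComplete(Map<String, Integer> expectedReplicaCount,
      Map<String, Integer> healthyReplicasElsewhere) {
    return expectedReplicaCount.entrySet().stream()
        .allMatch(e -> healthyReplicasElsewhere.getOrDefault(e.getKey(), 0) >= e.getValue());
  }
}
```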

[GitHub] [hadoop] sodonnel commented on a change in pull request #1196: HDDS-1881. Design doc: decommissioning in Ozone

2019-07-31 Thread GitBox
sodonnel commented on a change in pull request #1196: HDDS-1881. Design doc: 
decommissioning in Ozone
URL: https://github.com/apache/hadoop/pull/1196#discussion_r309438773
 
 

 ##
 File path: hadoop-hdds/docs/content/design/decommissioning.md
 ##
 @@ -0,0 +1,720 @@
+---
+title: Decommissioning in Ozone
+summary: Formal process to shut down machines in a safe way after the required 
replications.
+date: 2019-07-31
+jira: HDDS-1881
+status: current
+author: Anu Engineer, Marton Elek, Stephen O'Donnell 
+---
+
+
+# Abstract 
+
+The goal of decommissioning is to turn off a selected set of machines without 
data loss. It may or may not require moving the existing replicas of the 
containers to other nodes.
+
+There are two main classes of decommissioning:
+
+ * __Maintenance mode__: where the node is expected to be back after a while. 
It may not require replication of containers if enough replicas are available 
from other nodes (as we expect to have the current replicas after the restart.)
+
+ * __Decommissioning__: where the node won't be started again. All the data 
should be replicated according to the current replication rules.
+
+Goals:
+
+ * Decommissioning can be canceled any time
+ * The progress of the decommissioning should be trackable
+ * The nodes under decommissioning / maintenance mode should not be used for 
new pipelines / containers
+ * The state of the datanodes should be persisted / replicated by the SCM (in 
HDFS the decommissioning info exclude/include lists are replicated manually by 
the admin). If a datanode is marked for decommissioning, this state should be 
available after SCM and/or Datanode restarts.
+ * We need to support validations before decommissioning (but the violations 
can be ignored by the admin).
+ * The administrator should be notified when a node can be turned off.
+ * The maintenance mode can be time constrained: if the node is marked for 
maintenance for 1 week and the node is not up after one week, the containers 
should be considered as lost (DEAD node) and should be replicated.
+
+# Introduction
+
+Ozone is a highly available file system that relies on commodity hardware. In 
other words, Ozone is designed to handle failures of these nodes all the time.
+
+The Storage Container Manager(SCM) is designed to monitor the node health and 
replicate blocks and containers as needed.
+
+At times, Operators of the cluster can help the SCM by giving it hints. When 
removing a datanode, the operator can provide a hint. That is, a planned 
failure of the node is coming up, and SCM can make sure it reaches a safe state 
to handle this planned failure.
+
+Sometimes, this failure is transient; that is, the operator is taking down 
this node temporarily. In that case, we can live with lower replica counts by 
being optimistic.
+
+Both of these operations, __Maintenance__ and __Decommissioning__, are similar 
from the Replication point of view. In both cases, the user instructs us on 
how to handle an upcoming failure.
+
+Today, SCM (*Replication Manager* component inside SCM) understands only one 
form of failure handling. This paper extends Replica Manager failure modes to 
allow users to request which failure handling model to be adopted(Optimistic or 
Pessimistic).
+
+Based on physical realities, there are two responses to any perceived failure, 
to heal the system by taking corrective actions or ignore the failure since the 
actions in the future will heal the system automatically.
+
+## User Experiences (Decommissioning vs Maintenance mode)
+
+From the user's point of view, there are two kinds of planned failures that 
the user would like to communicate to Ozone.
+
+The first kind is when a 'real' failure is going to happen in the future. This 
'real' failure is the act of decommissioning. We denote this as "decommission" 
throughout this paper. The response that the user wants is SCM/Ozone to make 
replicas to deal with the planned failure.
+
+The second kind is when the failure is 'transient.' The user knows that this failure is temporary and the cluster in most cases can safely ignore this issue. However, if the transient failures are going to cause a loss of availability, then the user would like Ozone to take appropriate actions to address it. An example of this case is if the user puts 3 data nodes into maintenance mode and switches them off.
+
+The transient failure can violate the availability guarantees of Ozone, since the user is telling us not to take corrective actions. Many times, the user does not understand the impact on availability when asking Ozone to ignore the failure.
+
+So this paper proposes the following definitions for Decommission and 
Maintenance of data nodes.
+
+__Decommission__ of a data node is deemed to be complete when SCM/Ozone has replicated all containers on the decommissioned data node to other data nodes. That is, the expected count matches the healthy count of containers in the cluster.
+
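As a minimal sketch of the completion check described in the paragraph above (the type and field names are hypothetical, not SCM's actual ReplicationManager API):

```java
import java.util.Collection;

// Sketch only: decommission of a node is "complete" once every container that had
// a replica on that node has as many healthy replicas elsewhere as expected.
final class DecommissionProgress {

  /** Hypothetical per-container replica accounting. */
  static final class ContainerCounts {
    final int expectedReplicas;  // replication factor, e.g. 3
    final int healthyReplicas;   // healthy replicas on nodes that stay in service

    ContainerCounts(int expected, int healthy) {
      this.expectedReplicas = expected;
      this.healthyReplicas = healthy;
    }
  }

  /** True when every container formerly on the node is fully replicated elsewhere. */
  static boolean decommissionComplete(Collection<ContainerCounts> containersOnNode) {
    return containersOnNode.stream()
        .allMatch(c -> c.healthyReplicas >= c.expectedReplicas);
  }
}
```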

[jira] [Assigned] (HADOOP-15440) Support kerberos principal name pattern for KerberosAuthenticationHandler

2019-07-31 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HADOOP-15440:


Assignee: He Xiaoqiao

> Support kerberos principal name pattern for KerberosAuthenticationHandler
> -
>
> Key: HADOOP-15440
> URL: https://issues.apache.org/jira/browse/HADOOP-15440
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HADOOP-15440-trunk.001.patch
>
>
> When setting up an HttpFS server or KMS server in secure mode, we have to 
> configure a Kerberos principal for these services. They do not support 
> converting a Kerberos principal name pattern into valid Kerberos principal 
> names, whereas NameNode/DataNode and many other services can do that, so it 
> is confusing for users. So I propose to replace the hostname pattern with the 
> hostname, which should be the fully-qualified domain name.
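
A minimal sketch of the substitution being proposed, reusing the existing Hadoop Common helper (the configured value below is only an example, not a key these services read today):

{code}
import java.net.InetAddress;

import org.apache.hadoop.security.SecurityUtil;

public class PrincipalPatternDemo {
  public static void main(String[] args) throws Exception {
    // Example pattern as an admin would configure it.
    String configured = "HTTP/_HOST@EXAMPLE.COM";
    String fqdn = InetAddress.getLocalHost().getCanonicalHostName();
    // SecurityUtil.getServerPrincipal() replaces the _HOST token with the
    // fully-qualified domain name, as NameNode/DataNode already do.
    System.out.println(SecurityUtil.getServerPrincipal(configured, fqdn));
  }
}
{code}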



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15942) Change the logging level form DEBUG to ERROR for RuntimeErrorException in JMXJsonServlet

2019-07-31 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HADOOP-15942:


Assignee: Anuhan Torgonshar

> Change the logging level form DEBUG to ERROR for RuntimeErrorException in 
> JMXJsonServlet
> 
>
> Key: HADOOP-15942
> URL: https://issues.apache.org/jira/browse/HADOOP-15942
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.1.0
>Reporter: Anuhan Torgonshar
>Assignee: Anuhan Torgonshar
>Priority: Major
>  Labels: easyfix
> Attachments: HADOOP-15942.trunk.patch
>
>
> In the *JMXJsonServlet.java* file, when it invokes the *MBeanServer.getAttribute()* 
> method, many catch clauses follow, and each of them contains a log statement; 
> most of them set the logging level to *ERROR*. However, when it catches 
> *RuntimeErrorException* in line 348 (r1839798), the logging level of the log 
> statement in this catch clause is *DEBUG*. The annotation indicates that an 
> unexpected failure occurred in the getAttribute method, so I think the logging 
> level should be *ERROR* too. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16479) ABFS FileStatus.getModificationTime returns localized time instead of UTC

2019-07-31 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16897512#comment-16897512
 ] 

Steve Loughran commented on HADOOP-16479:
-

Probably unique to ABFS, though it shows we don't have a test for it.

[~tmarquardt] [~DanielZhou]

> ABFS FileStatus.getModificationTime returns localized time instead of UTC
> -
>
> Key: HADOOP-16479
> URL: https://issues.apache.org/jira/browse/HADOOP-16479
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Joan Sala Reixach
>Priority: Major
> Attachments: image-2019-07-31-18-21-53-023.png, 
> image-2019-07-31-18-23-37-349.png
>
>
> As per javadoc, the method FileStatus.getModificationTime() should return the 
> time in UTC, but it returns the time in the JVM timezone.
> The issue originates in AzureBlobFileSystemStore.getFileStatus() itself, since 
> parseLastModifiedTime() returns a wrong date. I have created a file in Azure 
> Data Lake Gen2, and when I look at it through the Azure Explorer it shows the 
> correct modification time, but the method returns a time that is 2 hours off 
> (I am in CET = UTC+2).
> Azure Explorer last modified time:
> !image-2019-07-31-18-21-53-023.png|width=460,height=45!
> AbfsClient parseLastModifiedTime:
> !image-2019-07-31-18-23-37-349.png|width=459,height=284!
> It shows 15:21 CEST as utcDate, when it should be 15:21 UTC, which results in 
> the 2 hour loss.
> DateFormat.parse uses a localized calendar to parse dates which might be the 
> source of the issue.
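
A minimal, self-contained sketch of the class of bug being described; the pattern and timestamp below are assumptions for illustration, not the actual ABFS parsing code:

{code}
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

public class ModificationTimeDemo {
  // Hypothetical pattern: the input string carries no time zone designator.
  private static final String PATTERN = "EEE, dd MMM yyyy HH:mm:ss";

  public static void main(String[] args) throws Exception {
    String lastModified = "Wed, 31 Jul 2019 15:21:00";

    // Parser left on the JVM default zone: "15:21" is read as local wall-clock
    // time, so the epoch value is shifted by the zone offset (2 hours in CEST).
    SimpleDateFormat localized = new SimpleDateFormat(PATTERN, Locale.US);
    Date asLocal = localized.parse(lastModified);

    // Parser pinned to UTC: "15:21" is read as 15:21 UTC, which matches the
    // FileStatus.getModificationTime() javadoc contract.
    SimpleDateFormat utc = new SimpleDateFormat(PATTERN, Locale.US);
    utc.setTimeZone(TimeZone.getTimeZone("UTC"));
    Date asUtc = utc.parse(lastModified);

    System.out.println("parsed in default zone: " + asLocal.getTime());
    System.out.println("parsed as UTC:          " + asUtc.getTime());
  }
}
{code}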



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16479) ABFS FileStatus.getModificationTime returns localized time instead of UTC

2019-07-31 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16479:

Summary: ABFS FileStatus.getModificationTime returns localized time instead 
of UTC  (was: FileStatus.getModificationTime returns localized time instead of 
UTC)

> ABFS FileStatus.getModificationTime returns localized time instead of UTC
> -
>
> Key: HADOOP-16479
> URL: https://issues.apache.org/jira/browse/HADOOP-16479
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Joan Sala Reixach
>Priority: Major
> Attachments: image-2019-07-31-18-21-53-023.png, 
> image-2019-07-31-18-23-37-349.png
>
>
> As per javadoc, the method FileStatus.getModificationTime() should return the 
> time in UTC, but it returns the time in the JVM timezone.
> The issue originates in AzureBlobFileSystemStore.getFileStatus() itself, since 
> parseLastModifiedTime() returns a wrong date. I have created a file in Azure 
> Data Lake Gen2, and when I look at it through the Azure Explorer it shows the 
> correct modification time, but the method returns a time that is 2 hours off 
> (I am in CET = UTC+2).
> Azure Explorer last modified time:
> !image-2019-07-31-18-21-53-023.png|width=460,height=45!
> AbfsClient parseLastModifiedTime:
> !image-2019-07-31-18-23-37-349.png|width=459,height=284!
> It shows 15:21 CEST as utcDate, when it should be 15:21 UTC, which results in 
> the 2 hour loss.
> DateFormat.parse uses a localized calendar to parse dates which might be the 
> source of the issue.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16482) S3A doesn't actually verify paths have the correct authority

2019-07-31 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16897501#comment-16897501
 ] 

Steve Loughran commented on HADOOP-16482:
-

straightforward to fix; we just make S3AFilesystem.qualify() verify authorities 
after the qualification process has completed.
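
A rough sketch of the kind of check meant here; the class and method names below are illustrative only, not the actual S3AFileSystem code:

{code}
import java.net.URI;

import org.apache.hadoop.fs.Path;

/** Illustrative helper; the real fix would live inside S3AFileSystem itself. */
public final class AuthorityCheck {
  private AuthorityCheck() {
  }

  /** Qualify a path against the filesystem URI and reject a mismatched bucket. */
  public static Path qualify(Path path, URI fsUri, Path workingDir) {
    Path qualified = path.makeQualified(fsUri, workingDir);
    String requested = qualified.toUri().getAuthority();
    String bound = fsUri.getAuthority();  // the bucket this FS instance is bound to
    if (requested != null && !requested.equals(bound)) {
      throw new IllegalArgumentException("Path " + qualified
          + " refers to bucket " + requested
          + " but this filesystem is bound to " + bound);
    }
    return qualified;
  }
}
{code}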

> S3A doesn't actually verify paths have the correct authority
> 
>
> Key: HADOOP-16482
> URL: https://issues.apache.org/jira/browse/HADOOP-16482
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Priority: Major
>
> Probably been around a *long* time, but we've never noticed, assuming that 
> {{Path.makeQualified(uri, workingDir)}} did the right thing.
> You can provide any s3a URI to an S3 command and it'll get mapped to the 
> current bucket without any validation that the authorities are equal. Oops.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16482) S3A doesn't actually verify paths have the correct authority

2019-07-31 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16897499#comment-16897499
 ] 

Steve Loughran commented on HADOOP-16482:
-

Shows up on trunk now that my cloudstore JAR has just added a {{filestatus}} command 
and I tried it with two paths, one referencing a bucket which doesn't exist

{code}
 bin/hadoop jar  cloudstore-0.1-SNAPSHOT.jar \
filestatus \
s3a://guarded-table/example s3a://guarded-example2/example
2019-07-31 21:17:15,372 [main] INFO  commands.PrintStatus 
(DurationInfo.java:(53)) - Starting: get path status
s3a://guarded-table/example S3AFileStatus{path=s3a://guarded-table/example; 
isDirectory=false; length=0; replication=1; blocksize=33554432; 
modification_time=156460268; access_time=0; owner=stevel; group=stevel; 
permission=rw-rw-rw-; isSymlink=false; hasAcl=false; isEncrypted=true; 
isErasureCoded=false} isEmptyDirectory=FALSE 
eTag=d41d8cd98f00b204e9800998ecf8427e versionId=null
s3a://guarded-example2/example  
S3AFileStatus{path=s3a://guarded-example2/example; isDirectory=false; length=0; 
replication=1; blocksize=33554432; modification_time=156460268; 
access_time=0; owner=stevel; group=stevel; permission=rw-rw-rw-; 
isSymlink=false; hasAcl=false; isEncrypted=true; isErasureCoded=false} 
isEmptyDirectory=FALSE eTag=d41d8cd98f00b204e9800998ecf8427e versionId=null
2019-07-31 21:17:17,234 [main] INFO  commands.PrintStatus 
(DurationInfo.java:close(100)) - get path status: duration 0:01:863

Retrieved the status of 2 files, 932 milliseconds per file
{code}

> S3A doesn't actually verify paths have the correct authority
> 
>
> Key: HADOOP-16482
> URL: https://issues.apache.org/jira/browse/HADOOP-16482
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Priority: Major
>
> Probably been around a *long* time, but we've never noticed, assuming that 
> {{Path.makeQualified(uri, workingDir)}} did the right thing.
> You can provide any s3a URI to an S3 command and it'll get mapped to the 
> current bucket without any validation that the authorities are equal. Oops.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16482) S3A doesn't actually verify paths have the correct authority

2019-07-31 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-16482:
---

 Summary: S3A doesn't actually verify paths have the correct 
authority
 Key: HADOOP-16482
 URL: https://issues.apache.org/jira/browse/HADOOP-16482
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.1.0
Reporter: Steve Loughran


Probably been around a *long* time, but we've never noticed, assuming that 
{{Path.makeQualified(uri, workingDir)}} did the right thing.

You can provide any s3a URI to an S3 command and it'll get mapped to the 
current bucket without any validation that the authorities are equal. Oops.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] vivekratnavel commented on a change in pull request #1146: HDDS-1366. Add ability in Recon to track the number of small files in an Ozone Cluster

2019-07-31 Thread GitBox
vivekratnavel commented on a change in pull request #1146: HDDS-1366. Add 
ability in Recon to track the number of small files in an Ozone Cluster
URL: https://github.com/apache/hadoop/pull/1146#discussion_r309408238
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/FileSizeCountTask.java
 ##
 @@ -0,0 +1,231 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.recon.tasks;
+
+import com.google.inject.Inject;
+import org.apache.commons.lang3.tuple.ImmutablePair;
+import org.apache.commons.lang3.tuple.Pair;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.utils.db.Table;
+import org.apache.hadoop.utils.db.TableIterator;
+import org.hadoop.ozone.recon.schema.tables.daos.FileCountBySizeDao;
+import org.hadoop.ozone.recon.schema.tables.pojos.FileCountBySize;
+import org.jooq.Configuration;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Iterator;
+import java.util.List;
+
+/**
+ * Class to iterate over the OM DB and store the counts of existing/new
+ * files binned into ranges (1KB, 10Kb..,10MB,..1PB) to the Recon
+ * fileSize DB.
+ */
+public class FileSizeCountTask extends ReconDBUpdateTask {
+  private static final Logger LOG =
+  LoggerFactory.getLogger(FileSizeCountTask.class);
+
+  private int maxBinSize = 41;
+  private long maxFileSizeUpperBound = 1125899906842624L;
+  private long SIZE_512_TB = 562949953421312L;
+  private long[] upperBoundCount = new long[maxBinSize];
+  private long ONE_KB = 1024L;
+  private Collection<String> tables = new ArrayList<>();
+  private FileCountBySizeDao fileCountBySizeDao;
+
+  @Inject
+  public FileSizeCountTask(OMMetadataManager omMetadataManager,
+  Configuration sqlConfiguration) {
+super("FileSizeCountTask");
+try {
+  tables.add(omMetadataManager.getKeyTable().getName());
+  fileCountBySizeDao = new FileCountBySizeDao(sqlConfiguration);
+} catch (Exception e) {
+  LOG.error("Unable to fetch Key Table updates ", e);
+}
+  }
+
+  /**
+   * Read the Keys from OM snapshot DB and calculate the upper bound of
+   * File Size it belongs to.
+   *
+   * @param omMetadataManager OM Metadata instance.
+   * @return Pair
+   */
+  @Override
+  public Pair<String, Boolean> reprocess(OMMetadataManager omMetadataManager) {
+LOG.info("Starting a 'reprocess' run of FileSizeCountTask.");
+
+fetchUpperBoundCount("reprocess");
+
+Table<String, OmKeyInfo> omKeyInfoTable = omMetadataManager.getKeyTable();
+try (TableIterator<String, ? extends Table.KeyValue<String, OmKeyInfo>>
+keyIter = omKeyInfoTable.iterator()) {
+  while (keyIter.hasNext()) {
+Table.KeyValue<String, OmKeyInfo> kv = keyIter.next();
+countFileSize(kv.getValue());
+  }
+
+} catch (IOException ioEx) {
+  LOG.error("Unable to populate File Size Count in Recon DB. ", ioEx);
+  return new ImmutablePair<>(getTaskName(), false);
+} finally {
+  populateFileCountBySizeDB();
+}
+
+LOG.info("Completed a 'reprocess' run of FileSizeCountTask.");
+return new ImmutablePair<>(getTaskName(), true);
+  }
+
+  private void fetchUpperBoundCount(String type) {
+if(type.equals("process")) {
+  List<FileCountBySize> resultSet = fileCountBySizeDao.findAll();
+  int index = 0;
+  if(resultSet != null) {
+for (FileCountBySize row : resultSet) {
+  upperBoundCount[index] = row.getCount();
+  index++;
+}
+  }
+} else {
+  upperBoundCount = new long[maxBinSize];
+}
+  }
+
+  @Override
+  protected Collection<String> getTaskTables() {
+return tables;
+  }
+
+  /**
+   * Read the Keys from update events and update the count of files
+   * pertaining to a certain upper bound.
+   *
+   * @param events Update events - PUT/DELETE.
+   * @return Pair
+   */
+  @Override
+  Pair<String, Boolean> process(OMUpdateEventBatch events) {
+LOG.info("Starting a 'process' run of FileSizeCountTask.");
+Iterator<OMDBUpdateEvent> eventIterator = events.getIterator();
+
+fetchUpperBoundCount("process");
+
+while (eventIterator.hasNext()) {
+  OMDBUpdateEvent omdbUpdateEvent = 
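
The countFileSize() / bin-index logic is not part of the quoted hunk; purely as a sketch, here is one way to map a file size onto the 41 power-of-two bins implied by the constants above (an assumption about the intended binning, not the patch's code):

```java
// Sketch only: bins cover <= 1 KB, <= 2 KB, ..., <= 512 TB, plus a final
// catch-all bin; constants mirror maxBinSize / ONE_KB / maxFileSizeUpperBound.
public final class FileSizeBins {
  private static final int MAX_BIN_SIZE = 41;
  private static final long ONE_KB = 1024L;
  private static final long MAX_FILE_SIZE_UPPER_BOUND = 1125899906842624L; // 1 PB

  static int binIndex(long fileSize) {
    if (fileSize >= MAX_FILE_SIZE_UPPER_BOUND) {
      return MAX_BIN_SIZE - 1;          // everything at or above 1 PB -> last bin
    }
    int index = 0;
    long upperBound = ONE_KB;           // first bin: sizes up to 1 KB
    while (fileSize > upperBound) {
      upperBound <<= 1;                 // next power-of-two boundary
      index++;
    }
    return index;
  }

  public static void main(String[] args) {
    System.out.println(binIndex(500));      // 0  -> <= 1 KB
    System.out.println(binIndex(4096));     // 2  -> <= 4 KB
    System.out.println(binIndex(1L << 50)); // 40 -> last bin
  }
}
```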

[GitHub] [hadoop] hadoop-yetus commented on issue #1199: HDDS-1885. Fix bug in checkAcls in OzoneManager.

2019-07-31 Thread GitBox
hadoop-yetus commented on issue #1199: HDDS-1885. Fix bug in checkAcls in 
OzoneManager.
URL: https://github.com/apache/hadoop/pull/1199#issuecomment-516993326
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 79 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 602 | trunk passed |
   | +1 | compile | 351 | trunk passed |
   | +1 | checkstyle | 69 | trunk passed |
   | +1 | mvnsite | 1 | trunk passed |
   | +1 | shadedclient | 897 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 155 | trunk passed |
   | 0 | spotbugs | 413 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 607 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 526 | the patch passed |
   | +1 | compile | 359 | the patch passed |
   | +1 | javac | 359 | the patch passed |
   | +1 | checkstyle | 72 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 708 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 154 | the patch passed |
   | +1 | findbugs | 644 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 353 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1903 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 42 | The patch does not generate ASF License warnings. |
   | | | 7689 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1199/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1199 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux f8d356895e42 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / b008072 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1199/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1199/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1199/1/testReport/ |
   | Max. process+thread count | 4282 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1199/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1195: HDDS-1878. checkstyle error in ContainerStateMachine

2019-07-31 Thread GitBox
hadoop-yetus commented on issue #1195: HDDS-1878. checkstyle error in 
ContainerStateMachine
URL: https://github.com/apache/hadoop/pull/1195#issuecomment-516983772
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 75 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 16 | Maven dependency ordering for branch |
   | +1 | mvninstall | 587 | trunk passed |
   | +1 | compile | 367 | trunk passed |
   | +1 | checkstyle | 71 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 931 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 164 | trunk passed |
   | 0 | spotbugs | 433 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 628 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 20 | Maven dependency ordering for patch |
   | +1 | mvninstall | 554 | the patch passed |
   | +1 | compile | 371 | the patch passed |
   | +1 | javac | 371 | the patch passed |
   | +1 | checkstyle | 36 | hadoop-hdds: The patch generated 0 new + 0 
unchanged - 1 fixed = 0 total (was 1) |
   | +1 | checkstyle | 39 | The patch passed checkstyle in hadoop-ozone |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 715 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 163 | the patch passed |
   | +1 | findbugs | 648 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 348 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1931 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 41 | The patch does not generate ASF License warnings. |
   | | | 7872 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1195/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1195 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 2948a352e3c7 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / a6f47b5 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1195/2/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1195/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1195/2/testReport/ |
   | Max. process+thread count | 4476 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/container-service U: 
hadoop-hdds |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1195/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sodonnel commented on a change in pull request #1196: HDDS-1881. Design doc: decommissioning in Ozone

2019-07-31 Thread GitBox
sodonnel commented on a change in pull request #1196: HDDS-1881. Design doc: 
decommissioning in Ozone
URL: https://github.com/apache/hadoop/pull/1196#discussion_r309383882
 
 

 ##
 File path: hadoop-hdds/docs/content/design/decommissioning.md
 ##
 @@ -0,0 +1,720 @@
+---
+title: Decommissioning in Ozone
+summary: Formal process to shut down machines in a safe way after the required 
replications.
+date: 2019-07-31
+jira: HDDS-1881
+status: current
+author: Anu Engineer, Marton Elek, Stephen O'Donnell 
+---
+
+
+# Abstract 
+
+The goal of decommissioning is to turn off a selected set of machines without data loss. It may or may not require moving the existing replicas of the containers to other nodes.
+
+There are two main classes of decommissioning:
+
+ * __Maintenance mode__: where the node is expected to be back after a while. 
It may not require replication of containers if enough replicas are available 
from other nodes (as we expect to have the current replicas after the restart.)
+
+ * __Decommissioning__: where the node won't be started again. All the data 
should be replicated according to the current replication rules.
+
+Goals:
+
+ * Decommissioning can be canceled any time
+ * The progress of the decommissioning should be trackable
+ * The nodes under decommissioning / maintenance mode should not be used for new pipelines / containers
+ * The state of the datanodes should be persisted / replicated by the SCM (in HDFS the decommissioning info exclude/include lists are replicated manually by the admin). If a datanode is marked for decommissioning, this state should be available after SCM and/or Datanode restarts.
+ * We need to support validations before decommissioning (but the violations can be ignored by the admin).
+ * The administrator should be notified when a node can be turned off.
+ * The maintenance mode can be time constrained: if the node is marked for maintenance for 1 week and is not up after one week, the containers should be considered lost (DEAD node) and should be replicated.
+
+# Introduction
+
+Ozone is a highly available file system that relies on commodity hardware. In 
other words, Ozone is designed to handle failures of these nodes all the time.
+
+The Storage Container Manager(SCM) is designed to monitor the node health and 
replicate blocks and containers as needed.
+
+At times, Operators of the cluster can help the SCM by giving it hints. When 
removing a datanode, the operator can provide a hint. That is, a planned 
failure of the node is coming up, and SCM can make sure it reaches a safe state 
to handle this planned failure.
+
+Sometimes, this failure is transient; that is, the operator is taking down this node temporarily. In that case, we can live with lower replica counts by being optimistic.
+
+Both of these operations, __Maintenance__ and __Decommissioning__, are similar from the Replication point of view. In both cases, the user instructs us on how to handle an upcoming failure.
+
+Today, SCM (the *Replication Manager* component inside SCM) understands only one form of failure handling. This paper extends the Replication Manager failure modes to allow users to request which failure handling model should be adopted (Optimistic or Pessimistic).
+
+Based on physical realities, there are two responses to any perceived failure: heal the system by taking corrective actions, or ignore the failure since future actions will heal the system automatically.
+
+## User Experiences (Decommissioning vs Maintenance mode)
+
+From the user's point of view, there are two kinds of planned failures that 
the user would like to communicate to Ozone.
+
+The first kind is when a 'real' failure is going to happen in the future. This 
'real' failure is the act of decommissioning. We denote this as "decommission" 
throughout this paper. The response that the user wants is SCM/Ozone to make 
replicas to deal with the planned failure.
+
+The second kind is when the failure is 'transient.' The user knows that this failure is temporary and the cluster in most cases can safely ignore this issue. However, if the transient failures are going to cause a loss of availability, then the user would like Ozone to take appropriate actions to address it. An example of this case is if the user puts 3 data nodes into maintenance mode and switches them off.
+
+The transient failure can violate the availability guarantees of Ozone, since the user is telling us not to take corrective actions. Many times, the user does not understand the impact on availability when asking Ozone to ignore the failure.
+
+So this paper proposes the following definitions for Decommission and 
Maintenance of data nodes.
+
+__Decommission__ of a data node is deemed to be complete when SCM/Ozone has replicated all containers on the decommissioned data node to other data nodes. That is, the expected count matches the healthy count of containers in the cluster.
+

[GitHub] [hadoop] sodonnel commented on a change in pull request #1196: HDDS-1881. Design doc: decommissioning in Ozone

2019-07-31 Thread GitBox
sodonnel commented on a change in pull request #1196: HDDS-1881. Design doc: 
decommissioning in Ozone
URL: https://github.com/apache/hadoop/pull/1196#discussion_r309383450
 
 

 ##
 File path: hadoop-hdds/docs/content/design/decommissioning.md
 ##
 @@ -0,0 +1,720 @@
+---
+title: Decommissioning in Ozone
+summary: Formal process to shut down machines in a safe way after the required 
replications.
+date: 2019-07-31
+jira: HDDS-1881
+status: current
+author: Anu Engineer, Marton Elek, Stephen O'Donnell 
+---
+
+
+# Abstract 
+
+The goal of decommissioning is to turn off a selected set of machines without data loss. It may or may not require moving the existing replicas of the containers to other nodes.
+
+There are two main classes of decommissioning:
+
+ * __Maintenance mode__: where the node is expected to be back after a while. 
It may not require replication of containers if enough replicas are available 
from other nodes (as we expect to have the current replicas after the restart.)
+
+ * __Decommissioning__: where the node won't be started again. All the data 
should be replicated according to the current replication rules.
+
+Goals:
+
+ * Decommissioning can be canceled any time
+ * The progress of the decommissioning should be trackable
+ * The nodes under decommissioning / maintenance mode should not be used for new pipelines / containers
+ * The state of the datanodes should be persisted / replicated by the SCM (in HDFS the decommissioning info exclude/include lists are replicated manually by the admin). If a datanode is marked for decommissioning, this state should be available after SCM and/or Datanode restarts.
+ * We need to support validations before decommissioning (but the violations can be ignored by the admin).
+ * The administrator should be notified when a node can be turned off.
+ * The maintenance mode can be time constrained: if the node is marked for maintenance for 1 week and is not up after one week, the containers should be considered lost (DEAD node) and should be replicated.
+
+# Introduction
+
+Ozone is a highly available file system that relies on commodity hardware. In 
other words, Ozone is designed to handle failures of these nodes all the time.
+
+The Storage Container Manager(SCM) is designed to monitor the node health and 
replicate blocks and containers as needed.
+
+At times, Operators of the cluster can help the SCM by giving it hints. When 
removing a datanode, the operator can provide a hint. That is, a planned 
failure of the node is coming up, and SCM can make sure it reaches a safe state 
to handle this planned failure.
+
+Sometimes, this failure is transient; that is, the operator is taking down this node temporarily. In that case, we can live with lower replica counts by being optimistic.
+
+Both of these operations, __Maintenance__ and __Decommissioning__, are similar from the Replication point of view. In both cases, the user instructs us on how to handle an upcoming failure.
+
+Today, SCM (the *Replication Manager* component inside SCM) understands only one form of failure handling. This paper extends the Replication Manager failure modes to allow users to request which failure handling model should be adopted (Optimistic or Pessimistic).
+
+Based on physical realities, there are two responses to any perceived failure: heal the system by taking corrective actions, or ignore the failure since future actions will heal the system automatically.
+
+## User Experiences (Decommissioning vs Maintenance mode)
+
+From the user's point of view, there are two kinds of planned failures that 
the user would like to communicate to Ozone.
+
+The first kind is when a 'real' failure is going to happen in the future. This 
'real' failure is the act of decommissioning. We denote this as "decommission" 
throughout this paper. The response that the user wants is SCM/Ozone to make 
replicas to deal with the planned failure.
+
+The second kind is when the failure is 'transient.' The user knows that this failure is temporary and the cluster in most cases can safely ignore this issue. However, if the transient failures are going to cause a loss of availability, then the user would like Ozone to take appropriate actions to address it. An example of this case is if the user puts 3 data nodes into maintenance mode and switches them off.
+
+The transient failure can violate the availability guarantees of Ozone, since the user is telling us not to take corrective actions. Many times, the user does not understand the impact on availability when asking Ozone to ignore the failure.
+
+So this paper proposes the following definitions for Decommission and 
Maintenance of data nodes.
+
+__Decommission__ of a data node is deemed to be complete when SCM/Ozone has replicated all containers on the decommissioned data node to other data nodes. That is, the expected count matches the healthy count of containers in the cluster.
+

[GitHub] [hadoop] pingsutw edited a comment on issue #1090: SUBMARINE-72 Kill and destroy the job through the submarine client

2019-07-31 Thread GitBox
pingsutw edited a comment on issue #1090: SUBMARINE-72 Kill and destroy the job 
through the submarine client
URL: https://github.com/apache/hadoop/pull/1090#issuecomment-516968944
 
 
   Update the patch 
   
   - KIlljob() -> killjob(), because the function should start with a lower 
case letter
   
   - LOG before printing usage
   
   - print additional information if appAdminClient fails to stop or destroy the job


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] pingsutw commented on issue #1090: SUBMARINE-72 Kill and destroy the job through the submarine client

2019-07-31 Thread GitBox
pingsutw commented on issue #1090: SUBMARINE-72 Kill and destroy the job 
through the submarine client
URL: https://github.com/apache/hadoop/pull/1090#issuecomment-516968944
 
 
   Upload the patch 
   
   - KIlljob() -> killjob(), because the function should start with a lower 
case letter
   
   - LOG before printing usage
   
   - print additional information if appAdminClient fails to stop or destroy the job


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16481) ITestS3GuardDDBRootOperations.test_300_MetastorePrune needs to set region

2019-07-31 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-16481:
---

 Summary: ITestS3GuardDDBRootOperations.test_300_MetastorePrune 
needs to set region
 Key: HADOOP-16481
 URL: https://issues.apache.org/jira/browse/HADOOP-16481
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3, test
Affects Versions: 3.3.0
Reporter: Steve Loughran
Assignee: Steve Loughran


The new test {{ITestS3GuardDDBRootOperations.test_300_MetastorePrune}} fails 
if you don't explicitly set the region
{code}
[ERROR] 
test_300_MetastorePrune(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardDDBRootOperations)
  Time elapsed: 0.845 s  <<< ERROR!
org.apache.hadoop.util.ExitUtil$ExitException: No region found from -region 
flag, config, or S3 bucket
at 
org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardDDBRootOperations.test_300_MetastorePrune(ITestS3GuardDDBRootOperations.java:186)
{code}
it should be picked up from the test filesystem.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #1160: HADOOP-16458 LocatedFileStatusFetcher.getFileStatuses failing intermittently with s3

2019-07-31 Thread GitBox
steveloughran commented on a change in pull request #1160: HADOOP-16458 
LocatedFileStatusFetcher.getFileStatuses failing intermittently with s3
URL: https://github.com/apache/hadoop/pull/1160#discussion_r309361930
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestLocatedFileStatusFetcher.java
 ##
 @@ -0,0 +1,40 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Test what the LocatedFileStatusFetcher can do.
+ * This is related to HADOOP-16458.
+ * There are basic tests in ITestS3AFSMainOperations; this
+ * is to see if we can create better corner cases.
+ */
+public class ITestLocatedFileStatusFetcher extends AbstractS3ATestBase {
 
 Review comment:
   I cut the file; all tests live in the ITestRestrictedRead operation.
   
   Reason: a basic scan isn't that interesting, but trying to break it is, and the 
test bucket I was working with was giving me list access to the store but not 
read access on files. I wanted to see if that was the problem, i.e. whether it was the 
reporting that was at fault. It isn't, but there's a good sequence of work 
there combining basic LocatedStatusFetcher operations which work with those 
that don't.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #1160: HADOOP-16458 LocatedFileStatusFetcher.getFileStatuses failing intermittently with s3

2019-07-31 Thread GitBox
steveloughran commented on issue #1160: HADOOP-16458 
LocatedFileStatusFetcher.getFileStatuses failing intermittently with s3
URL: https://github.com/apache/hadoop/pull/1160#issuecomment-516959483
 
 
   Unless this test is confusing DDB so there's no cleanup, I'd point to those 
test failures less as a sign of a problem here and more that our test runs 
still seem somehow to cause confusion, even after the tombstone fixes.
   
   `ITestS3GuardDDBRootOperations` will save the results of its scans to 
somewhere under target/, which you can use to see where a discrepancy has crept 
in. Usually it's something in S3 which isn't in S3Guard: an OOB event of some 
kind.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] swagle commented on a change in pull request #1146: HDDS-1366. Add ability in Recon to track the number of small files in an Ozone Cluster

2019-07-31 Thread GitBox
swagle commented on a change in pull request #1146: HDDS-1366. Add ability in 
Recon to track the number of small files in an Ozone Cluster
URL: https://github.com/apache/hadoop/pull/1146#discussion_r309349175
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/FileSizeCountTask.java
 ##
 @@ -0,0 +1,231 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.recon.tasks;
+
+import com.google.inject.Inject;
+import org.apache.commons.lang3.tuple.ImmutablePair;
+import org.apache.commons.lang3.tuple.Pair;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.utils.db.Table;
+import org.apache.hadoop.utils.db.TableIterator;
+import org.hadoop.ozone.recon.schema.tables.daos.FileCountBySizeDao;
+import org.hadoop.ozone.recon.schema.tables.pojos.FileCountBySize;
+import org.jooq.Configuration;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Iterator;
+import java.util.List;
+
+/**
+ * Class to iterate over the OM DB and store the counts of existing/new
+ * files binned into ranges (1KB, 10Kb..,10MB,..1PB) to the Recon
+ * fileSize DB.
+ */
+public class FileSizeCountTask extends ReconDBUpdateTask {
+  private static final Logger LOG =
+  LoggerFactory.getLogger(FileSizeCountTask.class);
+
+  private int maxBinSize = 41;
+  private long maxFileSizeUpperBound = 1125899906842624L;
+  private long SIZE_512_TB = 562949953421312L;
+  private long[] upperBoundCount = new long[maxBinSize];
+  private long ONE_KB = 1024L;
+  private Collection<String> tables = new ArrayList<>();
+  private FileCountBySizeDao fileCountBySizeDao;
+
+  @Inject
+  public FileSizeCountTask(OMMetadataManager omMetadataManager,
+  Configuration sqlConfiguration) {
+super("FileSizeCountTask");
+try {
+  tables.add(omMetadataManager.getKeyTable().getName());
+  fileCountBySizeDao = new FileCountBySizeDao(sqlConfiguration);
+} catch (Exception e) {
+  LOG.error("Unable to fetch Key Table updates ", e);
+}
+  }
+
+  /**
+   * Read the Keys from OM snapshot DB and calculate the upper bound of
+   * File Size it belongs to.
+   *
+   * @param omMetadataManager OM Metadata instance.
+   * @return Pair
+   */
+  @Override
+  public Pair<String, Boolean> reprocess(OMMetadataManager omMetadataManager) {
+LOG.info("Starting a 'reprocess' run of FileSizeCountTask.");
+
+fetchUpperBoundCount("reprocess");
+
+Table<String, OmKeyInfo> omKeyInfoTable = omMetadataManager.getKeyTable();
+try (TableIterator<String, ? extends Table.KeyValue<String, OmKeyInfo>>
+keyIter = omKeyInfoTable.iterator()) {
+  while (keyIter.hasNext()) {
+Table.KeyValue<String, OmKeyInfo> kv = keyIter.next();
+countFileSize(kv.getValue());
+  }
+
+} catch (IOException ioEx) {
+  LOG.error("Unable to populate File Size Count in Recon DB. ", ioEx);
+  return new ImmutablePair<>(getTaskName(), false);
+} finally {
+  populateFileCountBySizeDB();
+}
+
+LOG.info("Completed a 'reprocess' run of FileSizeCountTask.");
+return new ImmutablePair<>(getTaskName(), true);
+  }
+
+  private void fetchUpperBoundCount(String type) {
+if(type.equals("process")) {
+  List<FileCountBySize> resultSet = fileCountBySizeDao.findAll();
+  int index = 0;
+  if(resultSet != null) {
+for (FileCountBySize row : resultSet) {
+  upperBoundCount[index] = row.getCount();
+  index++;
+}
+  }
+} else {
+  upperBoundCount = new long[maxBinSize];
+}
+  }
+
+  @Override
+  protected Collection<String> getTaskTables() {
+return tables;
+  }
+
+  /**
+   * Read the Keys from update events and update the count of files
+   * pertaining to a certain upper bound.
+   *
+   * @param events Update events - PUT/DELETE.
+   * @return Pair
+   */
+  @Override
+  Pair<String, Boolean> process(OMUpdateEventBatch events) {
+LOG.info("Starting a 'process' run of FileSizeCountTask.");
+Iterator<OMDBUpdateEvent> eventIterator = events.getIterator();
+
+fetchUpperBoundCount("process");
+
+while (eventIterator.hasNext()) {
+  OMDBUpdateEvent omdbUpdateEvent = 

[GitHub] [hadoop] swagle commented on a change in pull request #1146: HDDS-1366. Add ability in Recon to track the number of small files in an Ozone Cluster

2019-07-31 Thread GitBox
swagle commented on a change in pull request #1146: HDDS-1366. Add ability in 
Recon to track the number of small files in an Ozone Cluster
URL: https://github.com/apache/hadoop/pull/1146#discussion_r309347235
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/FileSizeCountTask.java
 ##
 @@ -0,0 +1,231 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.recon.tasks;
+
+import com.google.inject.Inject;
+import org.apache.commons.lang3.tuple.ImmutablePair;
+import org.apache.commons.lang3.tuple.Pair;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.utils.db.Table;
+import org.apache.hadoop.utils.db.TableIterator;
+import org.hadoop.ozone.recon.schema.tables.daos.FileCountBySizeDao;
+import org.hadoop.ozone.recon.schema.tables.pojos.FileCountBySize;
+import org.jooq.Configuration;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Iterator;
+import java.util.List;
+
+/**
+ * Class to iterate over the OM DB and store the counts of existing/new
+ * files binned into ranges (1KB, 10Kb..,10MB,..1PB) to the Recon
+ * fileSize DB.
+ */
+public class FileSizeCountTask extends ReconDBUpdateTask {
+  private static final Logger LOG =
+  LoggerFactory.getLogger(FileSizeCountTask.class);
+
+  private int maxBinSize = 41;
+  private long maxFileSizeUpperBound = 1125899906842624L;
+  private long SIZE_512_TB = 562949953421312L;
+  private long[] upperBoundCount = new long[maxBinSize];
+  private long ONE_KB = 1024L;
+  private Collection<String> tables = new ArrayList<>();
+  private FileCountBySizeDao fileCountBySizeDao;
+
+  @Inject
+  public FileSizeCountTask(OMMetadataManager omMetadataManager,
+  Configuration sqlConfiguration) {
+super("FileSizeCountTask");
+try {
+  tables.add(omMetadataManager.getKeyTable().getName());
+  fileCountBySizeDao = new FileCountBySizeDao(sqlConfiguration);
+} catch (Exception e) {
+  LOG.error("Unable to fetch Key Table updates ", e);
+}
+  }
+
+  /**
+   * Read the Keys from OM snapshot DB and calculate the upper bound of
+   * File Size it belongs to.
+   *
+   * @param omMetadataManager OM Metadata instance.
+   * @return Pair
+   */
+  @Override
+  public Pair<String, Boolean> reprocess(OMMetadataManager omMetadataManager) {
+LOG.info("Starting a 'reprocess' run of FileSizeCountTask.");
+
+fetchUpperBoundCount("reprocess");
+
+Table<String, OmKeyInfo> omKeyInfoTable = omMetadataManager.getKeyTable();
+try (TableIterator<String, ? extends Table.KeyValue<String, OmKeyInfo>>
+keyIter = omKeyInfoTable.iterator()) {
+  while (keyIter.hasNext()) {
+Table.KeyValue<String, OmKeyInfo> kv = keyIter.next();
+countFileSize(kv.getValue());
+  }
+
+} catch (IOException ioEx) {
+  LOG.error("Unable to populate File Size Count in Recon DB. ", ioEx);
+  return new ImmutablePair<>(getTaskName(), false);
+} finally {
+  populateFileCountBySizeDB();
+}
+
+LOG.info("Completed a 'reprocess' run of FileSizeCountTask.");
+return new ImmutablePair<>(getTaskName(), true);
+  }
+
+  private void fetchUpperBoundCount(String type) {
+if(type.equals("process")) {
+  List<FileCountBySize> resultSet = fileCountBySizeDao.findAll();
+  int index = 0;
+  if(resultSet != null) {
+for (FileCountBySize row : resultSet) {
+  upperBoundCount[index] = row.getCount();
+  index++;
+}
+  }
+} else {
+  upperBoundCount = new long[maxBinSize];
+}
+  }
+
+  @Override
+  protected Collection<String> getTaskTables() {
+return tables;
+  }
+
+  /**
+   * Read the Keys from update events and update the count of files
+   * pertaining to a certain upper bound.
+   *
+   * @param events Update events - PUT/DELETE.
+   * @return Pair
+   */
+  @Override
+  Pair<String, Boolean> process(OMUpdateEventBatch events) {
+LOG.info("Starting a 'process' run of FileSizeCountTask.");
+Iterator<OMDBUpdateEvent> eventIterator = events.getIterator();
+
+fetchUpperBoundCount("process");
+
+while (eventIterator.hasNext()) {
+  OMDBUpdateEvent omdbUpdateEvent = 

[GitHub] [hadoop] swagle commented on a change in pull request #1146: HDDS-1366. Add ability in Recon to track the number of small files in an Ozone Cluster

2019-07-31 Thread GitBox
swagle commented on a change in pull request #1146: HDDS-1366. Add ability in 
Recon to track the number of small files in an Ozone Cluster
URL: https://github.com/apache/hadoop/pull/1146#discussion_r309349175
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/FileSizeCountTask.java
 ##
 @@ -0,0 +1,231 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.recon.tasks;
+
+import com.google.inject.Inject;
+import org.apache.commons.lang3.tuple.ImmutablePair;
+import org.apache.commons.lang3.tuple.Pair;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.utils.db.Table;
+import org.apache.hadoop.utils.db.TableIterator;
+import org.hadoop.ozone.recon.schema.tables.daos.FileCountBySizeDao;
+import org.hadoop.ozone.recon.schema.tables.pojos.FileCountBySize;
+import org.jooq.Configuration;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Iterator;
+import java.util.List;
+
+/**
+ * Class to iterate over the OM DB and store the counts of existing/new
+ * files binned into ranges (1KB, 10Kb..,10MB,..1PB) to the Recon
+ * fileSize DB.
+ */
+public class FileSizeCountTask extends ReconDBUpdateTask {
+  private static final Logger LOG =
+  LoggerFactory.getLogger(FileSizeCountTask.class);
+
+  private int maxBinSize = 41;
+  private long maxFileSizeUpperBound = 1125899906842624L;
+  private long SIZE_512_TB = 562949953421312L;
+  private long[] upperBoundCount = new long[maxBinSize];
+  private long ONE_KB = 1024L;
+  private Collection<String> tables = new ArrayList<>();
+  private FileCountBySizeDao fileCountBySizeDao;
+
+  @Inject
+  public FileSizeCountTask(OMMetadataManager omMetadataManager,
+  Configuration sqlConfiguration) {
+super("FileSizeCountTask");
+try {
+  tables.add(omMetadataManager.getKeyTable().getName());
+  fileCountBySizeDao = new FileCountBySizeDao(sqlConfiguration);
+} catch (Exception e) {
+  LOG.error("Unable to fetch Key Table updates ", e);
+}
+  }
+
+  /**
+   * Read the Keys from OM snapshot DB and calculate the upper bound of
+   * File Size it belongs to.
+   *
+   * @param omMetadataManager OM Metadata instance.
+   * @return Pair
+   */
+  @Override
+  public Pair<String, Boolean> reprocess(OMMetadataManager omMetadataManager) {
+LOG.info("Starting a 'reprocess' run of FileSizeCountTask.");
+
+fetchUpperBoundCount("reprocess");
+
+Table<String, OmKeyInfo> omKeyInfoTable = omMetadataManager.getKeyTable();
+try (TableIterator<String, ? extends Table.KeyValue<String, OmKeyInfo>>
+keyIter = omKeyInfoTable.iterator()) {
+  while(keyIter.hasNext()) {
+Table.KeyValue<String, OmKeyInfo> kv = keyIter.next();
+countFileSize(kv.getValue());
+  }
+
+} catch (IOException ioEx) {
+  LOG.error("Unable to populate File Size Count in Recon DB. ", ioEx);
+  return new ImmutablePair<>(getTaskName(), false);
+} finally {
+  populateFileCountBySizeDB();
+}
+
+LOG.info("Completed a 'reprocess' run of FileSizeCountTask.");
+return new ImmutablePair<>(getTaskName(), true);
+  }
+
+  private void fetchUpperBoundCount(String type) {
+if(type.equals("process")) {
+  List<FileCountBySize> resultSet = fileCountBySizeDao.findAll();
+  int index = 0;
+  if(resultSet != null) {
+for (FileCountBySize row : resultSet) {
+  upperBoundCount[index] = row.getCount();
+  index++;
+}
+  }
+} else {
+  upperBoundCount = new long[maxBinSize];
+}
+  }
+
+  @Override
+  protected Collection<String> getTaskTables() {
+return tables;
+  }
+
+  /**
+   * Read the Keys from update events and update the count of files
+   * pertaining to a certain upper bound.
+   *
+   * @param events Update events - PUT/DELETE.
+   * @return Pair
+   */
+  @Override
+  Pair<String, Boolean> process(OMUpdateEventBatch events) {
+LOG.info("Starting a 'process' run of FileSizeCountTask.");
+Iterator<OMDBUpdateEvent> eventIterator = events.getIterator();
+
+fetchUpperBoundCount("process");
+
+while (eventIterator.hasNext()) {
+  OMDBUpdateEvent omdbUpdateEvent = 
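
The excerpt above stops just short of the binning step that countFileSize() applies to each key. As a rough illustration only, assuming (from the constants above) bins that double from 1 KB up to the 1 PB cap across maxBinSize = 41 slots, the size-to-bin mapping could look like the sketch below; the actual helper in the PR may be organized differently.

{code}
import java.util.Arrays;

/** Illustrative sketch of the size-to-bin mapping; not the code under review. */
public class FileSizeBinningSketch {
  private static final int MAX_BIN_SIZE = 41;                     // number of bins
  private static final long ONE_KB = 1024L;                       // first upper bound
  private static final long MAX_FILE_SIZE_UPPER_BOUND = 1125899906842624L; // 1 PB

  /** Map a file size in bytes to the index of its upper-bound bin. */
  static int calculateBinIndex(long fileSize) {
    if (fileSize >= MAX_FILE_SIZE_UPPER_BOUND) {
      return MAX_BIN_SIZE - 1;                                    // overflow bin
    }
    int index = 0;
    long upperBound = ONE_KB;
    while (fileSize > upperBound) {                               // bins double: 1 KB, 2 KB, 4 KB, ...
      upperBound <<= 1;
      index++;
    }
    return index;
  }

  public static void main(String[] args) {
    long[] upperBoundCount = new long[MAX_BIN_SIZE];
    for (long size : new long[] {500L, 4096L, 10L * 1024 * 1024}) {
      upperBoundCount[calculateBinIndex(size)]++;                 // roughly what countFileSize() would do per key
    }
    System.out.println(Arrays.toString(upperBoundCount));
  }
}
{code}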

[GitHub] [hadoop] swagle commented on a change in pull request #1146: HDDS-1366. Add ability in Recon to track the number of small files in an Ozone Cluster

2019-07-31 Thread GitBox
swagle commented on a change in pull request #1146: HDDS-1366. Add ability in 
Recon to track the number of small files in an Ozone Cluster
URL: https://github.com/apache/hadoop/pull/1146#discussion_r309347235
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/FileSizeCountTask.java
 ##
 @@ -0,0 +1,231 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.recon.tasks;
+
+import com.google.inject.Inject;
+import org.apache.commons.lang3.tuple.ImmutablePair;
+import org.apache.commons.lang3.tuple.Pair;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.utils.db.Table;
+import org.apache.hadoop.utils.db.TableIterator;
+import org.hadoop.ozone.recon.schema.tables.daos.FileCountBySizeDao;
+import org.hadoop.ozone.recon.schema.tables.pojos.FileCountBySize;
+import org.jooq.Configuration;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Iterator;
+import java.util.List;
+
+/**
+ * Class to iterate over the OM DB and store the counts of existing/new
+ * files binned into ranges (1KB, 10Kb..,10MB,..1PB) to the Recon
+ * fileSize DB.
+ */
+public class FileSizeCountTask extends ReconDBUpdateTask {
+  private static final Logger LOG =
+  LoggerFactory.getLogger(FileSizeCountTask.class);
+
+  private int maxBinSize = 41;
+  private long maxFileSizeUpperBound = 1125899906842624L;
+  private long SIZE_512_TB = 562949953421312L;
+  private long[] upperBoundCount = new long[maxBinSize];
+  private long ONE_KB = 1024L;
+  private Collection<String> tables = new ArrayList<>();
+  private FileCountBySizeDao fileCountBySizeDao;
+
+  @Inject
+  public FileSizeCountTask(OMMetadataManager omMetadataManager,
+  Configuration sqlConfiguration) {
+super("FileSizeCountTask");
+try {
+  tables.add(omMetadataManager.getKeyTable().getName());
+  fileCountBySizeDao = new FileCountBySizeDao(sqlConfiguration);
+} catch (Exception e) {
+  LOG.error("Unable to fetch Key Table updates ", e);
+}
+  }
+
+  /**
+   * Read the Keys from OM snapshot DB and calculate the upper bound of
+   * File Size it belongs to.
+   *
+   * @param omMetadataManager OM Metadata instance.
+   * @return Pair
+   */
+  @Override
+  public Pair<String, Boolean> reprocess(OMMetadataManager omMetadataManager) {
+LOG.info("Starting a 'reprocess' run of FileSizeCountTask.");
+
+fetchUpperBoundCount("reprocess");
+
+Table<String, OmKeyInfo> omKeyInfoTable = omMetadataManager.getKeyTable();
+try (TableIterator<String, ? extends Table.KeyValue<String, OmKeyInfo>>
+keyIter = omKeyInfoTable.iterator()) {
+  while(keyIter.hasNext()) {
+Table.KeyValue<String, OmKeyInfo> kv = keyIter.next();
+countFileSize(kv.getValue());
+  }
+
+} catch (IOException ioEx) {
+  LOG.error("Unable to populate File Size Count in Recon DB. ", ioEx);
+  return new ImmutablePair<>(getTaskName(), false);
+} finally {
+  populateFileCountBySizeDB();
+}
+
+LOG.info("Completed a 'reprocess' run of FileSizeCountTask.");
+return new ImmutablePair<>(getTaskName(), true);
+  }
+
+  private void fetchUpperBoundCount(String type) {
+if(type.equals("process")) {
+  List<FileCountBySize> resultSet = fileCountBySizeDao.findAll();
+  int index = 0;
+  if(resultSet != null) {
+for (FileCountBySize row : resultSet) {
+  upperBoundCount[index] = row.getCount();
+  index++;
+}
+  }
+} else {
+  upperBoundCount = new long[maxBinSize];
+}
+  }
+
+  @Override
+  protected Collection<String> getTaskTables() {
+return tables;
+  }
+
+  /**
+   * Read the Keys from update events and update the count of files
+   * pertaining to a certain upper bound.
+   *
+   * @param events Update events - PUT/DELETE.
+   * @return Pair
+   */
+  @Override
+  Pair<String, Boolean> process(OMUpdateEventBatch events) {
+LOG.info("Starting a 'process' run of FileSizeCountTask.");
+Iterator<OMDBUpdateEvent> eventIterator = events.getIterator();
+
+fetchUpperBoundCount("process");
+
+while (eventIterator.hasNext()) {
+  OMDBUpdateEvent omdbUpdateEvent = 

[GitHub] [hadoop] bharatviswa504 opened a new pull request #1199: HDDS-1885. Fix bug in checkAcls in OzoneManager.

2019-07-31 Thread GitBox
bharatviswa504 opened a new pull request #1199: HDDS-1885. Fix bug in checkAcls 
in OzoneManager.
URL: https://github.com/apache/hadoop/pull/1199
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16398) Exports Hadoop metrics to Prometheus

2019-07-31 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16897366#comment-16897366
 ] 

Hudson commented on HADOOP-16398:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17017 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17017/])
HADOOP-16398. Exports Hadoop metrics to Prometheus (#1170) (github: rev 
8bda91d20ab248a0d262d396646861113195f3ed)
* (add) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/PrometheusServlet.java
* (add) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/sink/PrometheusMetricsSink.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
* (add) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/metrics2/sink/TestPrometheusMetricsSink.java
* (edit) 
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/BaseHttpServer.java
* (edit) hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java


> Exports Hadoop metrics to Prometheus
> 
>
> Key: HADOOP-16398
> URL: https://issues.apache.org/jira/browse/HADOOP-16398
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: metrics
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16398.001.patch, HADOOP-16398.002.patch
>
>
> Hadoop common side of HDDS-846. HDDS already have its own 
> PrometheusMetricsSink, so we can reuse the implementation.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] goiri merged pull request #1190: HDFS-14681: RBF: TestDisableRouterQuota failed because port 8888 was …

2019-07-31 Thread GitBox
goiri merged pull request #1190: HDFS-14681: RBF: TestDisableRouterQuota failed 
because port 8888 was …
URL: https://github.com/apache/hadoop/pull/1190
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sunchao opened a new pull request #1198: HDFS-14034: Support getQuotaUsage API in WebHDFS

2019-07-31 Thread GitBox
sunchao opened a new pull request #1198: HDFS-14034: Support getQuotaUsage API 
in WebHDFS
URL: https://github.com/apache/hadoop/pull/1198
 
 
   This backports HDFS-14034 to branch-2.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16398) Exports Hadoop metrics to Prometheus

2019-07-31 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16398:
---
Release Note: If "hadoop.prometheus.endpoint.enabled" is set to true, 
Prometheus-friendly formatted metrics can be obtained from the '/prom' endpoint of 
Hadoop daemons. The default value of the property is false.
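
For anyone trying this out, a minimal sketch of reading the endpoint follows. Only the property name and the '/prom' path come from the release note above; the daemon address (the NameNode web UI default on localhost:9870) and everything else are assumptions, and the property itself would be set to true in core-site.xml.

{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

/** Sketch: fetch Prometheus-formatted metrics from a Hadoop daemon's /prom endpoint. */
public class PromEndpointCheck {
  public static void main(String[] args) throws Exception {
    // Assumed address: NameNode web UI on localhost:9870 with
    // hadoop.prometheus.endpoint.enabled=true in core-site.xml.
    URL url = new URL("http://localhost:9870/prom");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
      String line;
      while ((line = in.readLine()) != null) {
        System.out.println(line);  // Prometheus text exposition format, one metric per line
      }
    }
  }
}
{code}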

> Exports Hadoop metrics to Prometheus
> 
>
> Key: HADOOP-16398
> URL: https://issues.apache.org/jira/browse/HADOOP-16398
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: metrics
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16398.001.patch, HADOOP-16398.002.patch
>
>
> Hadoop common side of HDDS-846. HDDS already have its own 
> PrometheusMetricsSink, so we can reuse the implementation.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] aajisaka merged pull request #1170: HADOOP-16398. Exports Hadoop metrics to Prometheus

2019-07-31 Thread GitBox
aajisaka merged pull request #1170: HADOOP-16398. Exports Hadoop metrics to 
Prometheus
URL: https://github.com/apache/hadoop/pull/1170
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] aajisaka commented on issue #1170: HADOOP-16398. Exports Hadoop metrics to Prometheus

2019-07-31 Thread GitBox
aajisaka commented on issue #1170: HADOOP-16398. Exports Hadoop metrics to 
Prometheus
URL: https://github.com/apache/hadoop/pull/1170#issuecomment-516938642
 
 
   Thanks @anuengineer and @adamantal for the reviews. Merged.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16458) LocatedFileStatusFetcher scans failing intermittently against S3 store

2019-07-31 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16458:

Summary: LocatedFileStatusFetcher scans failing intermittently against S3 
store  (was: LocatedFileStatusFetcher.getFileStatuses failing intermittently 
against S3 store)

> LocatedFileStatusFetcher scans failing intermittently against S3 store
> --
>
> Key: HADOOP-16458
> URL: https://issues.apache.org/jira/browse/HADOOP-16458
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
> Environment: S3 + S3Guard
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> Intermittent failure of LocatedFileStatusFetcher.getFileStatuses(), which is 
> using globStatus to find files.
> I'd say "turn s3guard on" except this appears to be the case, and the dataset 
> being read is
> over 1h old.
> Which means it is harder than I'd like to blame S3 for what would sound like 
> an inconsistency.
> We're hampered by the number of debug level statements in the globber code 
> being approximately none; there's no debugging to turn on. All we know is 
> that globFiles returns null without any explanation.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16398) Exports Hadoop metrics to Prometheus

2019-07-31 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16398:
---
   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Merged into trunk.

> Exports Hadoop metrics to Prometheus
> 
>
> Key: HADOOP-16398
> URL: https://issues.apache.org/jira/browse/HADOOP-16398
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: metrics
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16398.001.patch, HADOOP-16398.002.patch
>
>
> Hadoop common side of HDDS-846. HDDS already have its own 
> PrometheusMetricsSink, so we can reuse the implementation.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16476) Intermittent failure of ITestS3GuardConcurrentOps#testConcurrentTableCreations

2019-07-31 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16897345#comment-16897345
 ] 

Steve Loughran commented on HADOOP-16476:
-

Seen this too, while some network things were playing up.

Two points:
* The whole test runs to 300 seconds, locking up one of my 12 threads for 5 minutes. 
Retrying more will make the slow tests even slower.
* Do we need to run these tests *at all*?

If we do want to retain this test, let's declare it a scale test.

{code}
[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 300.816 
s <<< FAILURE! - in org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOps
[ERROR] 
testConcurrentTableCreations(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOps)
  Time elapsed: 300.012 s  <<< ERROR!
java.lang.Exception: test timed out after 300000 milliseconds
at java.lang.Thread.sleep(Native Method)
at 
com.amazonaws.waiters.FixedDelayStrategy.delayBeforeNextRetry(FixedDelayStrategy.java:45)
at 
com.amazonaws.waiters.WaiterExecution.safeCustomDelay(WaiterExecution.java:118)
at 
com.amazonaws.waiters.WaiterExecution.pollResource(WaiterExecution.java:82)
at com.amazonaws.waiters.WaiterImpl.run(WaiterImpl.java:88)
at 
com.amazonaws.services.dynamodbv2.document.Table.waitForDelete(Table.java:502)
at 
org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOps.deleteTable(ITestS3GuardConcurrentOps.java:77)
at 
org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOps.testConcurrentTableCreations(ITestS3GuardConcurrentOps.java:168)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)

{code}
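
A sketch of the "wait/retry more" option mentioned in the description, for illustration only: Table.waitForDelete() and the IllegalArgumentException it raises are taken from the traces above, while the attempt count and back-off are assumptions, not the actual test code.

{code}
import com.amazonaws.services.dynamodbv2.document.Table;

/** Illustrative bounded retry around DynamoDB table deletion; not the actual test code. */
final class TableDeletionHelper {
  private TableDeletionHelper() {
  }

  static void deleteAndWait(Table table, int attempts) throws InterruptedException {
    table.delete();
    for (int i = 1; i <= attempts; i++) {
      try {
        table.waitForDelete();            // blocks until DynamoDB reports the table gone
        return;
      } catch (IllegalArgumentException notDeletedYet) {
        if (i == attempts) {
          throw notDeletedYet;            // give up after the last attempt
        }
        Thread.sleep(10_000L * i);        // simple linear back-off before waiting again
      }
    }
  }
}
{code}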

> Intermittent failure of ITestS3GuardConcurrentOps#testConcurrentTableCreations
> --
>
> Key: HADOOP-16476
> URL: https://issues.apache.org/jira/browse/HADOOP-16476
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Priority: Minor
>
> Test is failing intermittently. One possible solution would be to wait 
> (retry) more because the table will be deleted eventually - it's not there 
> after the whole test run.
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 142.471 s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOps
> [ERROR] 
> testConcurrentTableCreations(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOps)
>   Time elapsed: 142.286 s  <<< ERROR!
> java.lang.IllegalArgumentException: Table 
> s3guard.test.testConcurrentTableCreations-1265635747 is not deleted.
>   at 
> com.amazonaws.services.dynamodbv2.document.Table.waitForDelete(Table.java:505)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOps.deleteTable(ITestS3GuardConcurrentOps.java:87)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOps.testConcurrentTableCreations(ITestS3GuardConcurrentOps.java:178)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 

[jira] [Updated] (HADOOP-16480) S3 Select Exceptions are not being converted to IOEs

2019-07-31 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16480:

Component/s: fs/s3

> S3 Select Exceptions are not being converted to IOEs
> 
>
> Key: HADOOP-16480
> URL: https://issues.apache.org/jira/browse/HADOOP-16480
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Minor
>
> Network outage seems to have raised a SelectObjectContentEventException 
> exception; it's not been translated to an IOE.
> Issue: recoverable or not? A normal input stream would try to recover by 
> re-opening at the current position, but to restart a seek you'd have to 
> repeat the entire streaming. 
> For now, fail.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16480) S3 Select Exceptions are not being converted to IOEs

2019-07-31 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16480:

Affects Version/s: 3.3.0

> S3 Select Exceptions are not being converted to IOEs
> 
>
> Key: HADOOP-16480
> URL: https://issues.apache.org/jira/browse/HADOOP-16480
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Minor
>
> Network outage seems to have raised a SelectObjectContentEventException 
> exception; it's not been translated to an IOE.
> Issue: recoverable or not? A normal input stream would try to recover by 
> re-opening at the current position, but to restart a seek you'd have to 
> repeat the entire streaming. 
> For now, fail.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16480) S3 Select Exceptions are not being converted to IOEs

2019-07-31 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16897335#comment-16897335
 ] 

Steve Loughran commented on HADOOP-16480:
-

{code}
[ERROR] 
testReadLandsatRecordsV1NoResults(org.apache.hadoop.fs.s3a.select.ITestS3SelectLandsat)
  Time elapsed: 9.361 s  <<< ERROR!
com.amazonaws.services.s3.model.SelectObjectContentEventException: Failed to 
read S3 select event.
at 
com.amazonaws.services.s3.model.SelectObjectContentEventStream$LazyLoadedIterator.advanceIfNeeded(SelectObjectContentEventStream.java:318)
at 
com.amazonaws.services.s3.model.SelectObjectContentEventStream$LazyLoadedIterator.hasNext(SelectObjectContentEventStream.java:292)
at 
com.amazonaws.services.s3.model.SelectObjectContentEventStream$EventStreamEnumeration.getNext(SelectObjectContentEventStream.java:244)
at 
com.amazonaws.services.s3.model.SelectObjectContentEventStream$LazyLoadedIterator.advanceIfNeeded(SelectObjectContentEventStream.java:315)
at 
com.amazonaws.services.s3.model.SelectObjectContentEventStream$LazyLoadedIterator.hasNext(SelectObjectContentEventStream.java:292)
at 
com.amazonaws.services.s3.model.SelectObjectContentEventStream$EventStreamEnumeration.hasMoreElements(SelectObjectContentEventStream.java:273)
at java.io.SequenceInputStream.nextStream(SequenceInputStream.java:109)
at java.io.SequenceInputStream.read(SequenceInputStream.java:211)
at 
com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
at 
com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
at 
org.apache.hadoop.fs.s3a.select.SelectInputStream.read(SelectInputStream.java:282)
at java.io.DataInputStream.read(DataInputStream.java:100)
at 
org.apache.hadoop.io.compress.PassthroughCodec$PassthroughDecompressorStream.read(PassthroughCodec.java:169)
at org.apache.hadoop.util.LineReader.fillBuffer(LineReader.java:182)
at org.apache.hadoop.util.LineReader.readCustomLine(LineReader.java:306)
at org.apache.hadoop.util.LineReader.readLine(LineReader.java:174)
at 
org.apache.hadoop.mapreduce.lib.input.LineRecordReader.skipUtfByteOrderMark(LineRecordReader.java:158)
at 
org.apache.hadoop.mapreduce.lib.input.LineRecordReader.nextKeyValue(LineRecordReader.java:198)
at 
org.apache.hadoop.fs.s3a.select.AbstractS3SelectTest.readRecords(AbstractS3SelectTest.java:577)
at 
org.apache.hadoop.fs.s3a.select.ITestS3SelectLandsat.testReadLandsatRecordsV1NoResults(ITestS3SelectLandsat.java:317)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:171)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
at sun.security.ssl.InputRecord.read(InputRecord.java:503)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:975)
at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:933)
at sun.security.ssl.AppInputStream.read(AppInputStream.java:105)
at 
com.amazonaws.thirdparty.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137)
at 
com.amazonaws.thirdparty.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:153)
at 
com.amazonaws.thirdparty.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:282)
at 
com.amazonaws.thirdparty.apache.http.impl.io.ChunkedInputStream.getChunkSize(ChunkedInputStream.java:264)
at 

[jira] [Created] (HADOOP-16480) S3 Select Exceptions are not being converted to IOEs

2019-07-31 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-16480:
---

 Summary: S3 Select Exceptions are not being converted to IOEs
 Key: HADOOP-16480
 URL: https://issues.apache.org/jira/browse/HADOOP-16480
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Steve Loughran


Network outage seems to have raised a SelectObjectContentEventException 
exception; it's not been translated to an IOE.

Issue: recoverable or not? A normal input stream would try to recover by 
re-opening at the current position, but to restart a seek you'd have to repeat 
the entire streaming. 

For now, fail.
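
A rough sketch of the translation being asked for: the exception class is the one seen in the stack traces, but the wrapper method and its placement are illustrative, not the actual SelectInputStream code.

{code}
import java.io.IOException;
import java.io.InputStream;

import com.amazonaws.services.s3.model.SelectObjectContentEventException;

/** Illustrative only: wrap the SDK's unchecked select exception in an IOException. */
final class SelectReadTranslation {
  private SelectReadTranslation() {
  }

  static int readWithTranslation(InputStream wrappedSelectStream, byte[] buf,
      int off, int len) throws IOException {
    try {
      return wrappedSelectStream.read(buf, off, len);
    } catch (SelectObjectContentEventException e) {
      // Treated as unrecoverable for now: a re-open would have to replay the whole SELECT.
      throw new IOException("Failure reading S3 Select stream: " + e.getMessage(), e);
    }
  }
}
{code}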



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on issue #1147: HDDS-1619. Support volume addACL operations for OM HA. Contributed by…

2019-07-31 Thread GitBox
bharatviswa504 commented on issue #1147: HDDS-1619. Support volume addACL 
operations for OM HA. Contributed by…
URL: https://github.com/apache/hadoop/pull/1147#issuecomment-516935599
 
 
   To handle non-HA with the new HA code as well, we have made some changes in 
HDDS-1856. So, this Jira needs a few more changes, like adding the response to the 
double buffer and setting the flush future in validateAndUpdateCache.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1167: HDDS-1863. Freon RandomKeyGenerator even if keySize is set to 0, it returns some random data to key.

2019-07-31 Thread GitBox
bharatviswa504 commented on a change in pull request #1167: HDDS-1863. Freon 
RandomKeyGenerator even if keySize is set to 0, it returns some random data to 
key.
URL: https://github.com/apache/hadoop/pull/1167#discussion_r309330700
 
 

 ##
 File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/RandomKeyGenerator.java
 ##
 @@ -263,9 +262,7 @@ public Void call() throws Exception {
 // Compute the common initial digest for all keys without their UUID
 if (validateWrites) {
   commonInitialMD = DigestUtils.getDigest(DIGEST_ALGORITHM);
-  int uuidLength = UUID.randomUUID().toString().length();
-  keySize = Math.max(uuidLength, keySize);
-  for (long nrRemaining = keySize - uuidLength; nrRemaining > 0;
+  for (long nrRemaining = keySize; nrRemaining > 0;
 
 Review comment:
   Opened https://issues.apache.org/jira/browse/HDDS-1883 for this.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1167: HDDS-1863. Freon RandomKeyGenerator even if keySize is set to 0, it returns some random data to key.

2019-07-31 Thread GitBox
bharatviswa504 commented on a change in pull request #1167: HDDS-1863. Freon 
RandomKeyGenerator even if keySize is set to 0, it returns some random data to 
key.
URL: https://github.com/apache/hadoop/pull/1167#discussion_r309329802
 
 

 ##
 File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/RandomKeyGenerator.java
 ##
 @@ -263,9 +262,7 @@ public Void call() throws Exception {
 // Compute the common initial digest for all keys without their UUID
 if (validateWrites) {
   commonInitialMD = DigestUtils.getDigest(DIGEST_ALGORITHM);
-  int uuidLength = UUID.randomUUID().toString().length();
-  keySize = Math.max(uuidLength, keySize);
-  for (long nrRemaining = keySize - uuidLength; nrRemaining > 0;
+  for (long nrRemaining = keySize; nrRemaining > 0;
 
 Review comment:
   Yes, we don't enter the for loop if the keySize passed is negative.
   
   If we want to add keySize checks, I think they would be applicable to all the 
parameters, like numVolumes and numBuckets. I will open a new Jira to handle this.
   
   So, if any parameter is incorrect, should we show an error message to the user, 
or is there another idea for how to handle this?
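
   As an illustration of the kind of up-front check being discussed (the field names 
mirror the Freon options, but this is a hypothetical helper, not the actual 
RandomKeyGenerator code):

{code}
/** Hypothetical parameter validation helper for Freon options; illustrative only. */
final class FreonParamCheck {
  private FreonParamCheck() {
  }

  static void validate(long keySize, int numOfVolumes, int numOfBuckets, int numOfKeys) {
    if (keySize < 0) {
      throw new IllegalArgumentException("keySize must be >= 0, but was " + keySize);
    }
    if (numOfVolumes <= 0 || numOfBuckets <= 0 || numOfKeys <= 0) {
      throw new IllegalArgumentException(
          "numOfVolumes, numOfBuckets and numOfKeys must all be > 0");
    }
  }
}
{code}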


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sodonnel commented on a change in pull request #1196: HDDS-1881. Design doc: decommissioning in Ozone

2019-07-31 Thread GitBox
sodonnel commented on a change in pull request #1196: HDDS-1881. Design doc: 
decommissioning in Ozone
URL: https://github.com/apache/hadoop/pull/1196#discussion_r309329492
 
 

 ##
 File path: hadoop-hdds/docs/content/design/decommissioning.md
 ##
 @@ -0,0 +1,720 @@
+---
+title: Decommissioning in Ozone
+summary: Formal process to shut down machines in a safe way after the required 
replications.
+date: 2019-07-31
+jira: HDDS-1881
+status: current
+author: Anu Engineer, Marton Elek, Stephen O'Donnell 
+---
+
+
+# Abstract 
+
+The goal of decommissioning is to turn off a selected set of machines without 
data loss. It may or may not require to move the existing replicas of the 
containers to other nodes.
+
+There are two main classes of the decommissioning:
+
+ * __Maintenance mode__: where the node is expected to be back after a while. 
It may not require replication of containers if enough replicas are available 
from other nodes (as we expect to have the current replicas after the restart.)
+
+ * __Decommissioning__: where the node won't be started again. All the data 
should be replicated according to the current replication rules.
+
+Goals:
+
+ * Decommissioning can be canceled any time
+ * The progress of the decommissioning should be trackable
+ * The nodes under decommissioning / maintenance mode should not be used for 
new pipelines / containers
+ * The state of the datanodes should be persisted / replicated by the SCM (in 
HDFS the decommissioning info exclude/include lists are replicated manually by 
the admin). If a datanode is marked for decommissioning, this state should be 
available after SCM and/or Datanode restarts.
+ * We need to support validations before decommissioning (but the violations 
can be ignored by the admin).
+ * The administrator should be notified when a node can be turned off.
+ * The maintenance mode can be time constrained: if the node is marked for 
maintenance for 1 week and the node is not up after one week, the containers 
should be considered as lost (DEAD node) and should be replicated.
+
+# Introduction
+
+Ozone is a highly available file system that relies on commodity hardware. In 
other words, Ozone is designed to handle failures of these nodes all the time.
+
+The Storage Container Manager(SCM) is designed to monitor the node health and 
replicate blocks and containers as needed.
+
+At times, Operators of the cluster can help the SCM by giving it hints. When 
removing a datanode, the operator can provide a hint. That is, a planned 
failure of the node is coming up, and SCM can make sure it reaches a safe state 
to handle this planned failure.
+
+Sometimes, this failure is transient; that is, the operator is taking down 
this node temporarily. In that case, we can live with lower replica counts by 
being optimistic.
+
+Both of these operations, __Maintenance__ and __Decommissioning__, are similar 
from the Replication point of view. In both cases, the user instructs us on 
how to handle an upcoming failure.
+
+Today, SCM (the *Replication Manager* component inside SCM) understands only one 
form of failure handling. This paper extends Replica Manager failure modes to 
allow users to request which failure handling model to be adopted (Optimistic or 
Pessimistic).
+
+Based on physical realities, there are two responses to any perceived failure, 
to heal the system by taking corrective actions or ignore the failure since the 
actions in the future will heal the system automatically.
+
+## User Experiences (Decommissioning vs Maintenance mode)
+
+From the user's point of view, there are two kinds of planned failures that 
the user would like to communicate to Ozone.
+
+The first kind is when a 'real' failure is going to happen in the future. This 
'real' failure is the act of decommissioning. We denote this as "decommission" 
throughout this paper. The response that the user wants is SCM/Ozone to make 
replicas to deal with the planned failure.
+
+The second kind is when the failure is 'transient.' The user knows that this 
failure is temporary and the cluster in most cases can safely ignore this issue. 
However, if the transient failures are going to cause a failure of 
availability, then the user would like Ozone to take appropriate actions to 
address it. An example of this case is if the user puts 3 data nodes into 
maintenance mode and switches them off.
+
+The transient failure can violate the availability guarantees of Ozone, since 
the user is telling us not to take corrective actions. Many times, the user 
does not understand the impact on availability while asking Ozone to ignore the 
failure.
+
+So this paper proposes the following definitions for Decommission and 
Maintenance of data nodes.
+
+__Decommission__ of a data node is deemed to be complete when SCM/Ozone 
completes the replication of all containers on the decommissioned data node to 
other data nodes. That is, the expected count matches the healthy count of 
containers in the cluster.
+
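
To make the distinction above concrete, here is a purely illustrative sketch of the replication decision it implies; every name is a hypothetical stand-in, not the actual SCM ReplicationManager code.

{code}
/**
 * Illustrative only: replicas on decommissioning or dead nodes are treated as
 * already lost (pessimistic), while replicas on maintenance nodes are expected
 * back (optimistic) and only trigger new copies if availability would fall
 * below a tolerated minimum. Names are hypothetical.
 */
final class ReplicationDecisionSketch {
  private ReplicationDecisionSketch() {
  }

  static boolean needsAdditionalReplicas(int healthyReplicas,
      int maintenanceReplicas, int replicationFactor, int maintenanceMinimum) {
    if (healthyReplicas >= replicationFactor) {
      return false;                                    // fully replicated on healthy nodes
    }
    if (maintenanceReplicas > 0) {
      return healthyReplicas < maintenanceMinimum;     // wait for the node to come back
    }
    return true;                                       // decommissioned/dead replicas: re-replicate now
  }
}
{code}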

[GitHub] [hadoop] sodonnel commented on a change in pull request #1196: HDDS-1881. Design doc: decommissioning in Ozone

2019-07-31 Thread GitBox
sodonnel commented on a change in pull request #1196: HDDS-1881. Design doc: 
decommissioning in Ozone
URL: https://github.com/apache/hadoop/pull/1196#discussion_r309328417
 
 

 ##
 File path: hadoop-hdds/docs/content/design/decommissioning.md
 ##
 @@ -0,0 +1,720 @@
+---
+title: Decommissioning in Ozone
+summary: Formal process to shut down machines in a safe way after the required 
replications.
+date: 2019-07-31
+jira: HDDS-1881
+status: current
+author: Anu Engineer, Marton Elek, Stephen O'Donnell 
+---
+
+
+# Abstract 
+
+The goal of decommissioning is to turn off a selected set of machines without 
data loss. It may or may not require to move the existing replicas of the 
containers to other nodes.
+
+There are two main classes of the decommissioning:
+
+ * __Maintenance mode__: where the node is expected to be back after a while. 
It may not require replication of containers if enough replicas are available 
from other nodes (as we expect to have the current replicas after the restart.)
+
+ * __Decommissioning__: where the node won't be started again. All the data 
should be replicated according to the current replication rules.
+
+Goals:
+
+ * Decommissioning can be canceled any time
+ * The progress of the decommissioning should be trackable
+ * The nodes under decommissioning / maintenance mode should not be used for 
new pipelines / containers
+ * The state of the datanodes should be persisted / replicated by the SCM (in 
HDFS the decommissioning info exclude/include lists are replicated manually by 
the admin). If a datanode is marked for decommissioning, this state should be 
available after SCM and/or Datanode restarts.
+ * We need to support validations before decommissioning (but the violations 
can be ignored by the admin).
+ * The administrator should be notified when a node can be turned off.
+ * The maintenance mode can be time constrained: if the node is marked for 
maintenance for 1 week and the node is not up after one week, the containers 
should be considered as lost (DEAD node) and should be replicated.
+
+# Introduction
+
+Ozone is a highly available file system that relies on commodity hardware. In 
other words, Ozone is designed to handle failures of these nodes all the time.
+
+The Storage Container Manager(SCM) is designed to monitor the node health and 
replicate blocks and containers as needed.
+
+At times, Operators of the cluster can help the SCM by giving it hints. When 
removing a datanode, the operator can provide a hint. That is, a planned 
failure of the node is coming up, and SCM can make sure it reaches a safe state 
to handle this planned failure.
+
+Sometimes, this failure is transient; that is, the operator is taking down 
this node temporarily. In that case, we can live with lower replica counts by 
being optimistic.
+
+Both of these operations, __Maintenance__ and __Decommissioning__, are similar 
from the Replication point of view. In both cases, the user instructs us on 
how to handle an upcoming failure.
+
+Today, SCM (the *Replication Manager* component inside SCM) understands only one 
form of failure handling. This paper extends Replica Manager failure modes to 
allow users to request which failure handling model to be adopted (Optimistic or 
Pessimistic).
+
+Based on physical realities, there are two responses to any perceived failure, 
to heal the system by taking corrective actions or ignore the failure since the 
actions in the future will heal the system automatically.
+
+## User Experiences (Decommissioning vs Maintenance mode)
+
+From the user's point of view, there are two kinds of planned failures that 
the user would like to communicate to Ozone.
+
+The first kind is when a 'real' failure is going to happen in the future. This 
'real' failure is the act of decommissioning. We denote this as "decommission" 
throughout this paper. The response that the user wants is SCM/Ozone to make 
replicas to deal with the planned failure.
+
+The second kind is when the failure is 'transient.' The user knows that this 
failure is temporary and the cluster in most cases can safely ignore this issue. 
However, if the transient failures are going to cause a failure of 
availability, then the user would like Ozone to take appropriate actions to 
address it. An example of this case is if the user puts 3 data nodes into 
maintenance mode and switches them off.
+
+The transient failure can violate the availability guarantees of Ozone, since 
the user is telling us not to take corrective actions. Many times, the user 
does not understand the impact on availability while asking Ozone to ignore the 
failure.
+
+So this paper proposes the following definitions for Decommission and 
Maintenance of data nodes.
+
+__Decommission__ of a data node is deemed to be complete when SCM/Ozone 
completes the replication of all containers on the decommissioned data node to 
other data nodes. That is, the expected count matches the healthy count of 
containers in the cluster.
+

[GitHub] [hadoop] sodonnel commented on a change in pull request #1196: HDDS-1881. Design doc: decommissioning in Ozone

2019-07-31 Thread GitBox
sodonnel commented on a change in pull request #1196: HDDS-1881. Design doc: 
decommissioning in Ozone
URL: https://github.com/apache/hadoop/pull/1196#discussion_r309327167
 
 

 ##
 File path: hadoop-hdds/docs/content/design/decommissioning.md
 ##
 @@ -0,0 +1,720 @@
+---
+title: Decommissioning in Ozone
+summary: Formal process to shut down machines in a safe way after the required 
replications.
+date: 2019-07-31
+jira: HDDS-1881
+status: current
+author: Anu Engineer, Marton Elek, Stephen O'Donnell 
+---
+
+
+# Abstract 
+
+The goal of decommissioning is to turn off a selected set of machines without 
data loss. It may or may not require to move the existing replicas of the 
containers to other nodes.
+
+There are two main classes of the decommissioning:
+
+ * __Maintenance mode__: where the node is expected to be back after a while. 
It may not require replication of containers if enough replicas are available 
from other nodes (as we expect to have the current replicas after the restart.)
+
+ * __Decommissioning__: where the node won't be started again. All the data 
should be replicated according to the current replication rules.
+
+Goals:
+
+ * Decommissioning can be canceled any time
+ * The progress of the decommissioning should be trackable
+ * The nodes under decommissioning / maintenance mode should not be used for 
new pipelines / containers
+ * The state of the datanodes should be persisted / replicated by the SCM (in 
HDFS the decommissioning info exclude/include lists are replicated manually by 
the admin). If a datanode is marked for decommissioning, this state should be 
available after SCM and/or Datanode restarts.
+ * We need to support validations before decommissioning (but the violations 
can be ignored by the admin).
+ * The administrator should be notified when a node can be turned off.
+ * The maintenance mode can be time constrained: if the node is marked for 
maintenance for 1 week and the node is not up after one week, the containers 
should be considered as lost (DEAD node) and should be replicated.
+
+# Introduction
+
+Ozone is a highly available file system that relies on commodity hardware. In 
other words, Ozone is designed to handle failures of these nodes all the time.
+
+The Storage Container Manager(SCM) is designed to monitor the node health and 
replicate blocks and containers as needed.
+
+At times, Operators of the cluster can help the SCM by giving it hints. When 
removing a datanode, the operator can provide a hint. That is, a planned 
failure of the node is coming up, and SCM can make sure it reaches a safe state 
to handle this planned failure.
+
+Sometimes, this failure is transient; that is, the operator is taking down 
this node temporarily. In that case, we can live with lower replica counts by 
being optimistic.
+
+Both of these operations, __Maintenance__ and __Decommissioning__, are similar 
from the Replication point of view. In both cases, the user instructs us on 
how to handle an upcoming failure.
+
+Today, SCM (the *Replication Manager* component inside SCM) understands only one 
form of failure handling. This paper extends Replica Manager failure modes to 
allow users to request which failure handling model to be adopted (Optimistic or 
Pessimistic).
+
+Based on physical realities, there are two responses to any perceived failure, 
to heal the system by taking corrective actions or ignore the failure since the 
actions in the future will heal the system automatically.
+
+## User Experiences (Decommissioning vs Maintenance mode)
+
+From the user's point of view, there are two kinds of planned failures that 
the user would like to communicate to Ozone.
+
+The first kind is when a 'real' failure is going to happen in the future. This 
'real' failure is the act of decommissioning. We denote this as "decommission" 
throughout this paper. The response that the user wants is SCM/Ozone to make 
replicas to deal with the planned failure.
+
+The second kind is when the failure is 'transient.' The user knows that this 
failure is temporary and the cluster in most cases can safely ignore this issue. 
However, if the transient failures are going to cause a failure of 
availability, then the user would like Ozone to take appropriate actions to 
address it. An example of this case is if the user puts 3 data nodes into 
maintenance mode and switches them off.
+
+The transient failure can violate the availability guarantees of Ozone, since 
the user is telling us not to take corrective actions. Many times, the user 
does not understand the impact on availability while asking Ozone to ignore the 
failure.
+
+So this paper proposes the following definitions for Decommission and 
Maintenance of data nodes.
+
+__Decommission__ of a data node is deemed to be complete when SCM/Ozone 
completes the replication of all containers on the decommissioned data node to 
other data nodes. That is, the expected count matches the healthy count of 
containers in the cluster.
+

[GitHub] [hadoop] sodonnel commented on a change in pull request #1196: HDDS-1881. Design doc: decommissioning in Ozone

2019-07-31 Thread GitBox
sodonnel commented on a change in pull request #1196: HDDS-1881. Design doc: 
decommissioning in Ozone
URL: https://github.com/apache/hadoop/pull/1196#discussion_r309325349
 
 

 ##
 File path: hadoop-hdds/docs/content/design/decommissioning.md
 ##
 @@ -0,0 +1,720 @@
+---
+title: Decommissioning in Ozone
+summary: Formal process to shut down machines in a safe way after the required 
replications.
+date: 2019-07-31
+jira: HDDS-1881
+status: current
+author: Anu Engineer, Marton Elek, Stephen O'Donnell 
+---
+
+
+# Abstract 
+
+The goal of decommissioning is to turn off a selected set of machines without 
data loss. It may or may not require to move the existing replicas of the 
containers to other nodes.
+
+There are two main classes of the decommissioning:
+
+ * __Maintenance mode__: where the node is expected to be back after a while. 
It may not require replication of containers if enough replicas are available 
from other nodes (as we expect to have the current replicas after the restart.)
+
+ * __Decommissioning__: where the node won't be started again. All the data 
should be replicated according to the current replication rules.
+
+Goals:
+
+ * Decommissioning can be canceled any time
+ * The progress of the decommissioning should be trackable
+ * The nodes under decommissioning / maintenance mode should not be used for 
new pipelines / containers
+ * The state of the datanodes should be persisted / replicated by the SCM (in 
HDFS the decommissioning info exclude/include lists are replicated manually by 
the admin). If a datanode is marked for decommissioning, this state should be 
available after SCM and/or Datanode restarts.
+ * We need to support validations before decommissioning (but the violations 
can be ignored by the admin).
+ * The administrator should be notified when a node can be turned off.
+ * The maintenance mode can be time constrained: if the node is marked for 
maintenance for 1 week and the node is not up after one week, the containers 
should be considered as lost (DEAD node) and should be replicated.
+
+# Introduction
+
+Ozone is a highly available file system that relies on commodity hardware. In 
other words, Ozone is designed to handle failures of these nodes all the time.
+
+The Storage Container Manager (SCM) is designed to monitor the node health and 
replicate blocks and containers as needed.
+
+At times, operators of the cluster can help the SCM by giving it hints. When 
removing a datanode, the operator can provide a hint that a planned failure of 
the node is coming up, and SCM can make sure it reaches a safe state to handle 
this planned failure.
+
+Sometimes, this failure is transient; that is, the operator is taking down 
this node temporarily. In that case, we can live with lower replica counts by 
being optimistic.
+
+Both of these operations, __Maintenance__ and __Decommissioning__, are similar 
from the Replication point of view. In both cases, the user instructs us on how 
to handle an upcoming failure.
+
+Today, SCM (*Replication Manager* component inside SCM) understands only one 
form of failure handling. This paper extends Replica Manager failure modes to 
allow users to request which failure handling model to be adopted (Optimistic or 
Pessimistic).
+
+Based on physical realities, there are two responses to any perceived failure: 
heal the system by taking corrective actions, or ignore the failure since 
future actions will heal the system automatically.
+
+## User Experiences (Decommissioning vs Maintenance mode)
+
+From the user's point of view, there are two kinds of planned failures that 
the user would like to communicate to Ozone.
+
+The first kind is when a 'real' failure is going to happen in the future. This 
'real' failure is the act of decommissioning. We denote this as "decommission" 
throughout this paper. The response the user wants is for SCM/Ozone to make 
replicas to deal with the planned failure.
+
+The second kind is when the failure is 'transient.' The user knows that this 
failure is temporary and the cluster can, in most cases, safely ignore it. 
However, if the transient failures are going to cause a loss of availability, 
then the user would like Ozone to take appropriate action to address it. An 
example of this case is a user putting 3 data nodes into maintenance mode and 
switching them off.
+
+A transient failure can violate the availability guarantees of Ozone, since 
the user is telling us not to take corrective actions. Many times, the user 
does not understand the impact on availability when asking Ozone to ignore the 
failure.
+
+So this paper proposes the following definitions for Decommission and 
Maintenance of data nodes.
+
+__Decommission__ of a data node is deemed to be complete when SCM/Ozone 
completes the replication of all containers on the decommissioned data node to 
other data nodes. That is, the expected count matches the healthy count of 
containers in the cluster.
+
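
For illustration only, the two admin-driven modes discussed in this excerpt 
could be modeled as node states roughly as follows (the names are assumptions, 
not the actual HDDS node-state machine):

```java
/**
 * Illustration only: one way the admin-driven modes above could be modeled.
 * The names are assumptions, not the actual HDDS node-state machine.
 */
public enum NodeOperationalState {
  IN_SERVICE,            // normal node; eligible for new pipelines/containers
  DECOMMISSIONING,       // re-replication in progress; no new allocations
  DECOMMISSIONED,        // all containers replicated elsewhere; safe to turn off
  ENTERING_MAINTENANCE,  // validations running; no new allocations
  IN_MAINTENANCE         // node may be off; repair only if availability is at risk
}
```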

[GitHub] [hadoop] arp7 merged pull request #1188: HDDS-1875. Fix failures in TestS3MultipartUploadAbortResponse.

2019-07-31 Thread GitBox
arp7 merged pull request #1188: HDDS-1875. Fix failures in 
TestS3MultipartUploadAbortResponse.
URL: https://github.com/apache/hadoop/pull/1188
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16479) FileStatus.getModificationTime returns localized time instead of UTC

2019-07-31 Thread Joan Sala Reixach (JIRA)
Joan Sala Reixach created HADOOP-16479:
--

 Summary: FileStatus.getModificationTime returns localized time 
instead of UTC
 Key: HADOOP-16479
 URL: https://issues.apache.org/jira/browse/HADOOP-16479
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/azure
Affects Versions: 3.2.0
Reporter: Joan Sala Reixach
 Attachments: image-2019-07-31-18-21-53-023.png, 
image-2019-07-31-18-23-37-349.png

As per javadoc, the method FileStatus.getModificationTime() should return the 
time in UTC, but it returns the time in the JVM timezone.

The issue originates in AzureBlobFileSystemStore.getFileStatus() itself, since 
parseLastModifiedTime() returns a wrong date. I have created a file in Azure 
Data Lake Gen2, and when I look at it through the Azure Explorer it shows the 
correct modification time, but the method returns a time 2 hours earlier (I am 
in CET = UTC+2).

Azure Explorer last modified time:

!image-2019-07-31-18-21-53-023.png|width=460,height=45!

AbfsClient parseLastModifiedTime:

!image-2019-07-31-18-23-37-349.png|width=459,height=284!

It shows 15:21 CEST as utcDate, when it should be 15:21 UTC, which results in 
the 2-hour offset.

DateFormat.parse uses a localized calendar to parse dates, which might be the 
source of the issue.
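
A generic Java illustration of the suspected pitfall (this is not the 
AbfsClient code): if the parsed string carries no zone designator, 
SimpleDateFormat interprets it in the JVM default zone unless the formatter is 
explicitly pinned to UTC.

{code}
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

// Generic illustration of the timezone pitfall; not the AbfsClient code.
public class UtcParseDemo {
  public static void main(String[] args) throws Exception {
    String stamp = "31 Jul 2019 15:21:00";        // no explicit zone in the string
    String pattern = "dd MMM yyyy HH:mm:ss";

    SimpleDateFormat localized = new SimpleDateFormat(pattern, Locale.US);
    Date asLocal = localized.parse(stamp);        // interpreted in the JVM zone (e.g. CEST)

    SimpleDateFormat utc = new SimpleDateFormat(pattern, Locale.US);
    utc.setTimeZone(TimeZone.getTimeZone("UTC"));
    Date asUtc = utc.parse(stamp);                // interpreted as 15:21 UTC

    // In a CET/CEST JVM the two epoch values differ by the zone offset (2h in summer).
    System.out.println(asUtc.getTime() - asLocal.getTime());
  }
}
{code}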



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sodonnel commented on a change in pull request #1196: HDDS-1881. Design doc: decommissioning in Ozone

2019-07-31 Thread GitBox
sodonnel commented on a change in pull request #1196: HDDS-1881. Design doc: 
decommissioning in Ozone
URL: https://github.com/apache/hadoop/pull/1196#discussion_r309314790
 
 

 ##
 File path: hadoop-hdds/docs/content/design/decommissioning.md
 ##
 @@ -0,0 +1,720 @@
+---
+title: Decommissioning in Ozone
+summary: Formal process to shut down machines in a safe way after the required 
replications.
+date: 2019-07-31
+jira: HDDS-1881
+status: current
+author: Anu Engineer, Marton Elek, Stephen O'Donnell 
+---
+
+
+# Abstract 
+
+The goal of decommissioning is to turn off a selected set of machines without 
data loss. It may or may not require moving the existing replicas of the 
containers to other nodes.
+
+There are two main classes of decommissioning:
+
+ * __Maintenance mode__: where the node is expected to be back after a while. 
It may not require replication of containers if enough replicas are available 
from other nodes (as we expect to have the current replicas after the restart.)
+
+ * __Decommissioning__: where the node won't be started again. All the data 
should be replicated according to the current replication rules.
+
+Goals:
+
+ * Decommissioning can be canceled at any time
+ * The progress of the decommissioning should be trackable
+ * The nodes under decommissioning / maintenance mode should not be used for 
new pipelines / containers
+ * The state of the datanodes should be persisted / replicated by the SCM (in 
HDFS the decommissioning info exclude/include lists are replicated manually by 
the admin). If a datanode is marked for decommissioning, this state should be 
available after SCM and/or Datanode restarts.
+ * We need to support validations before decommissioning (but the violations 
can be ignored by the admin).
+ * The administrator should be notified when a node can be turned off.
+ * The maintenance mode can be time constrained: if a node is marked for 
maintenance for 1 week and is not up after one week, the containers should be 
considered as lost (DEAD node) and should be replicated.
+
+# Introduction
+
+Ozone is a highly available file system that relies on commodity hardware. In 
other words, Ozone is designed to handle failures of these nodes all the time.
+
+The Storage Container Manager (SCM) is designed to monitor the node health and 
replicate blocks and containers as needed.
+
+At times, operators of the cluster can help the SCM by giving it hints. When 
removing a datanode, the operator can provide a hint that a planned failure of 
the node is coming up, and SCM can make sure it reaches a safe state to handle 
this planned failure.
+
+Sometimes, this failure is transient; that is, the operator is taking down 
this node temporarily. In that case, we can live with lower replica counts by 
being optimistic.
+
+Both of these operations, __Maintenance__ and __Decommissioning__, are similar 
from the Replication point of view. In both cases, the user instructs us on how 
to handle an upcoming failure.
+
+Today, SCM (*Replication Manager* component inside SCM) understands only one 
form of failure handling. This paper extends Replica Manager failure modes to 
allow users to request which failure handling model to be adopted (Optimistic or 
Pessimistic).
+
+Based on physical realities, there are two responses to any perceived failure: 
heal the system by taking corrective actions, or ignore the failure since 
future actions will heal the system automatically.
+
+## User Experiences (Decommissioning vs Maintenance mode)
+
+From the user's point of view, there are two kinds of planned failures that 
the user would like to communicate to Ozone.
+
+The first kind is when a 'real' failure is going to happen in the future. This 
'real' failure is the act of decommissioning. We denote this as "decommission" 
throughout this paper. The response the user wants is for SCM/Ozone to make 
replicas to deal with the planned failure.
+
+The second kind is when the failure is 'transient.' The user knows that this 
failure is temporary and the cluster can, in most cases, safely ignore it. 
However, if the transient failures are going to cause a loss of availability, 
then the user would like Ozone to take appropriate action to address it. An 
example of this case is a user putting 3 data nodes into maintenance mode and 
switching them off.
+
+A transient failure can violate the availability guarantees of Ozone, since 
the user is telling us not to take corrective actions. Many times, the user 
does not understand the impact on availability when asking Ozone to ignore the 
failure.
+
+So this paper proposes the following definitions for Decommission and 
Maintenance of data nodes.
+
+__Decommission__ of a data node is deemed to be complete when SCM/Ozone 
completes the replication of all containers on the decommissioned data node to 
other data nodes. That is, the expected count matches the healthy count of 
containers in the cluster.
+
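
As a rough Java illustration of the time-constrained maintenance goal listed 
above (assumed names, not the actual SCM node-manager API), a node whose 
maintenance window expires without re-registering would be treated as DEAD and 
its containers re-replicated:

```java
import java.time.Duration;
import java.time.Instant;

/**
 * Rough illustration of the time-constrained maintenance goal above.
 * The names are assumptions, not the actual SCM node-manager API.
 */
public final class MaintenanceWindow {
  private final Instant start;
  private final Duration maxDuration;  // e.g. Duration.ofDays(7)

  MaintenanceWindow(Instant start, Duration maxDuration) {
    this.start = start;
    this.maxDuration = maxDuration;
  }

  /**
   * Once the window expires, a node that has not re-registered is treated as
   * DEAD and its containers become candidates for re-replication.
   */
  boolean expired(Instant now) {
    return now.isAfter(start.plus(maxDuration));
  }
}
```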

[GitHub] [hadoop] bharatviswa504 commented on issue #1187: HDDS-1829 On OM reload/restart OmMetrics#numKeys should be updated

2019-07-31 Thread GitBox
bharatviswa504 commented on issue #1187: HDDS-1829 On OM reload/restart 
OmMetrics#numKeys should be updated
URL: https://github.com/apache/hadoop/pull/1187#issuecomment-516921673
 
 
   @smengcl to get a CI run we need to label the PR as ozone.
   
   To get a label, just post a comment: /label ozone.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1160: HADOOP-16458 LocatedFileStatusFetcher.getFileStatuses failing intermittently with s3

2019-07-31 Thread GitBox
hadoop-yetus commented on issue #1160: HADOOP-16458 
LocatedFileStatusFetcher.getFileStatuses failing intermittently with s3
URL: https://github.com/apache/hadoop/pull/1160#issuecomment-516919826
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 48 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1142 | trunk passed |
   | +1 | compile | 1162 | trunk passed |
   | +1 | checkstyle | 153 | trunk passed |
   | +1 | mvnsite | 173 | trunk passed |
   | +1 | shadedclient | 1046 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 138 | trunk passed |
   | 0 | spotbugs | 72 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | -1 | findbugs | 70 | hadoop-tools/hadoop-aws in trunk has 1 extant 
findbugs warnings. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for patch |
   | +1 | mvninstall | 112 | the patch passed |
   | +1 | compile | 1000 | the patch passed |
   | +1 | javac | 1000 | the patch passed |
   | -0 | checkstyle | 147 | root: The patch generated 1 new + 232 unchanged - 
8 fixed = 233 total (was 240) |
   | +1 | mvnsite | 188 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 682 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 141 | the patch passed |
   | -1 | findbugs | 132 | hadoop-common-project/hadoop-common generated 1 new 
+ 0 unchanged - 0 fixed = 1 total (was 0) |
   ||| _ Other Tests _ |
   | +1 | unit | 536 | hadoop-common in the patch passed. |
   | +1 | unit | 340 | hadoop-mapreduce-client-core in the patch passed. |
   | +1 | unit | 291 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 55 | The patch does not generate ASF License warnings. |
   | | | 7967 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-common-project/hadoop-common |
   |  |  Null pointer dereference of Globber.fs in new 
org.apache.hadoop.fs.Globber(FileContext, Path, PathFilter, boolean)  
Dereferenced at Globber.java:in new org.apache.hadoop.fs.Globber(FileContext, 
Path, PathFilter, boolean)  Dereferenced at Globber.java:[line 105] |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1160/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1160 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 690a6441d144 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ac8ed7b |
   | Default Java | 1.8.0_212 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1160/5/artifact/out/branch-findbugs-hadoop-tools_hadoop-aws-warnings.html
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1160/5/artifact/out/diff-checkstyle-root.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1160/5/artifact/out/new-findbugs-hadoop-common-project_hadoop-common.html
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1160/5/testReport/ |
   | Max. process+thread count | 1606 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
hadoop-tools/hadoop-aws U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1160/5/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 merged pull request #1181: HDDS-1849. Implement S3 Complete MPU request to use Cache and DoubleBuffer.

2019-07-31 Thread GitBox
bharatviswa504 merged pull request #1181: HDDS-1849. Implement S3 Complete MPU 
request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1181
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on issue #1181: HDDS-1849. Implement S3 Complete MPU request to use Cache and DoubleBuffer.

2019-07-31 Thread GitBox
bharatviswa504 commented on issue #1181: HDDS-1849. Implement S3 Complete MPU 
request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1181#issuecomment-516919501
 
 
   TestOzoneManagerDoubleBufferWithOMResponse is passing locally. Other test 
failures are not related to this patch.
   Thank You @arp7 for the review.
   I will commit this to the trunk.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on issue #877: HDDS-1618. Merge code for HA and Non-HA OM write type requests for bucket

2019-07-31 Thread GitBox
bharatviswa504 commented on issue #877: HDDS-1618. Merge code for HA and Non-HA 
OM write type requests for bucket
URL: https://github.com/apache/hadoop/pull/877#issuecomment-516918048
 
 
    This is taken care of as part of HDDS-1856.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 closed pull request #877: HDDS-1618. Merge code for HA and Non-HA OM write type requests for bucket

2019-07-31 Thread GitBox
bharatviswa504 closed pull request #877: HDDS-1618. Merge code for HA and 
Non-HA OM write type requests for bucket
URL: https://github.com/apache/hadoop/pull/877
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16478) S3Guard bucket-info fails if the bucket location is denied to the caller

2019-07-31 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16897291#comment-16897291
 ] 

Steve Loughran edited comment on HADOOP-16478 at 7/31/19 3:57 PM:
--

{code}
java.nio.file.AccessDeniedException: mow-dev-istio-west-demo: 
getBucketLocation() on s3a://restricted: 
com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: 
Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 
030653A1119B53A7; S3 Extended Request ID: 
lmr6jNHSrfpvjcuyJP4D0wovmqnfFVrnHOQNQD9SXV6ZVTF7eF5IHddEXnUtp2STMvxc7PySzkw=), 
S3 Extended Request ID: 
lmr6jNHSrfpvjcuyJP4D0wovmqnfFVrnHOQNQD9SXV6ZVTF7eF5IHddEXnUtp2STMvxc7PySzkw=:AccessDenied
at 
org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:243)
at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:111)
at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$4(Invoker.java:314)
at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:406)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:310)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:285)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.getBucketLocation(S3AFileSystem.java:716)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.getBucketLocation(S3AFileSystem.java:703)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$BucketInfo.run(S3GuardTool.java:1185)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:401)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:1672)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.main(S3GuardTool.java:1681)
Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied 
(Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 
030653A1119B53A7; S3 Extended Request ID: 
lmr6jNHSrfpvjcuyJP4D0wovmqnfFVrnHOQNQD9SXV6ZVTF7eF5IHddEXnUtp2STMvxc7PySzkw=), 
S3 Extended Request ID: 
lmr6jNHSrfpvjcuyJP4D0wovmqnfFVrnHOQNQD9SXV6ZVTF7eF5IHddEXnUtp2STMvxc7PySzkw=
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1712)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1367)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1113)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:770)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:744)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:726)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:686)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:668)
at 
com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:532)
at 
com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:512)
at 
com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4920)
at 
com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4866)
at 
com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4860)
at 
com.amazonaws.services.s3.AmazonS3Client.getBucketLocation(AmazonS3Client.java:999)
at 
com.amazonaws.services.s3.AmazonS3Client.getBucketLocation(AmazonS3Client.java:1005)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$getBucketLocation$3(S3AFileSystem.java:717)
at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
... 11 more
{code}


was (Author: ste...@apache.org):
{code}
java.nio.file.AccessDeniedException: mow-dev-istio-west-demo: 
getBucketLocation() on s3a://restricted: 
com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: 
Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 
030653A1119B53A7; S3 Extended Request ID: 
lmr6jNHSrfpvjcuyJP4D0wovmqnfFVrnHOQNQD9SXV6ZVTF7eF5IHddEXnUtp2STMvxc7PySzkw=), 
S3 Extended Request ID: 
lmr6jNHSrfpvjcuyJP4D0wovmqnfFVrnHOQNQD9SXV6ZVTF7eF5IHddEXnUtp2STMvxc7PySzkw=:AccessDenied
at 
org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:243)
at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:111)
at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$4(Invoker.java:314)
at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:406)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:310)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:285)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.getBucketLocation(S3AFileSystem.java:716)
at 

[jira] [Commented] (HADOOP-16478) S3Guard bucket-info fails if the bucket location is denied to the caller

2019-07-31 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16897291#comment-16897291
 ] 

Steve Loughran commented on HADOOP-16478:
-

{code}
java.nio.file.AccessDeniedException: mow-dev-istio-west-demo: 
getBucketLocation() on s3a://restricted: 
com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: 
Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 
030653A1119B53A7; S3 Extended Request ID: 
lmr6jNHSrfpvjcuyJP4D0wovmqnfFVrnHOQNQD9SXV6ZVTF7eF5IHddEXnUtp2STMvxc7PySzkw=), 
S3 Extended Request ID: 
lmr6jNHSrfpvjcuyJP4D0wovmqnfFVrnHOQNQD9SXV6ZVTF7eF5IHddEXnUtp2STMvxc7PySzkw=:AccessDenied
at 
org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:243)
at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:111)
at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$4(Invoker.java:314)
at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:406)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:310)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:285)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.getBucketLocation(S3AFileSystem.java:716)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.getBucketLocation(S3AFileSystem.java:703)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$BucketInfo.run(S3GuardTool.java:1185)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:401)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:1672)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.main(S3GuardTool.java:1681)
Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied 
(Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 
030653A1119B53A7; S3 Extended Request ID: 
lmr6jNHSrfpvjcuyJP4D0wovmqnfFVrnHOQNQD9SXV6ZVTF7eF5IHddEXnUtp2STMvxc7PySzkw=), 
S3 Extended Request ID: 
lmr6jNHSrfpvjcuyJP4D0wovmqnfFVrnHOQNQD9SXV6ZVTF7eF5IHddEXnUtp2STMvxc7PySzkw=
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1712)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1367)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1113)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:770)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:744)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:726)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:686)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:668)
at 
com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:532)
at 
com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:512)
at 
com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4920)
at 
com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4866)
at 
com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4860)
at 
com.amazonaws.services.s3.AmazonS3Client.getBucketLocation(AmazonS3Client.java:999)
at 
com.amazonaws.services.s3.AmazonS3Client.getBucketLocation(AmazonS3Client.java:1005)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$getBucketLocation$3(S3AFileSystem.java:717)
at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
... 11 more


> S3Guard bucket-info fails if the bucket location is denied to the caller
> 
>
> Key: HADOOP-16478
> URL: https://issues.apache.org/jira/browse/HADOOP-16478
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Priority: Major
>
> IF you call "Hadoop s3guard bucket info" on a bucket and you don't have 
> permission to list the bucket location, then you get a stack trace, with all 
> other diagnostics being missing.
> Preferred: catch the exception, warn its unknown and only log@ debug



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16478) S3Guard bucket-info fails if the bucket location is denied to the caller

2019-07-31 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-16478:
---

 Summary: S3Guard bucket-info fails if the bucket location is 
denied to the caller
 Key: HADOOP-16478
 URL: https://issues.apache.org/jira/browse/HADOOP-16478
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.2.0
Reporter: Steve Loughran


IF you call "Hadoop s3guard bucket info" on a bucket and you don't have 
permission to list the bucket location, then you get a stack trace, with all 
other diagnostics being missing.

Preferred: catch the exception, warn its unknown and only log@ debug
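
A rough sketch of that preferred behaviour (not the actual S3GuardTool change; 
the helper name is assumed):

{code}
import java.io.IOException;
import java.nio.file.AccessDeniedException;

import org.apache.hadoop.fs.s3a.S3AFileSystem;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
 * Sketch of the preferred behaviour; not the actual S3GuardTool change.
 * The surrounding tool would already hold the S3AFileSystem instance.
 */
public final class BucketInfoLocation {
  private static final Logger LOG =
      LoggerFactory.getLogger(BucketInfoLocation.class);

  static String describeLocation(S3AFileSystem fs) throws IOException {
    try {
      return fs.getBucketLocation();
    } catch (AccessDeniedException e) {
      // The caller may lack s3:GetBucketLocation; keep the rest of the diagnostics.
      LOG.debug("Failed to look up bucket location", e);
      return "unknown (access denied)";
    }
  }
}
{code}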



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16403) Start a new statistical rpc queue and make the Reader's pendingConnection queue runtime-replaceable

2019-07-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16897287#comment-16897287
 ] 

Hadoop QA commented on HADOOP-16403:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 16s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 17m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
30s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 15s{color} | {color:orange} root: The patch generated 3 new + 545 unchanged 
- 10 fixed = 548 total (was 555) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 10m 
18s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
12s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m 52s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
51s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}190m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestLargeBlockReport |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HADOOP-16403 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12976332/HADOOP-16403.004.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  cc  xml  |
| uname | Linux e8f26cf51140 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 
