[GitHub] [hadoop] bharatviswa504 opened a new pull request #949: OzoneManager Lock change the volumeLock weight to 0

2019-06-11 Thread GitBox
bharatviswa504 opened a new pull request #949: OzoneManager Lock change the 
volumeLock weight to 0
URL: https://github.com/apache/hadoop/pull/949
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #927: HDDS-1543. Implement addAcl,removeAcl,setAcl,getAcl for Prefix. Contr…

2019-06-11 Thread GitBox
hadoop-yetus commented on issue #927: HDDS-1543. Implement 
addAcl,removeAcl,setAcl,getAcl for Prefix. Contr…
URL: https://github.com/apache/hadoop/pull/927#issuecomment-501127935
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 76 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 12 | Maven dependency ordering for branch |
   | +1 | mvninstall | 514 | trunk passed |
   | +1 | compile | 319 | trunk passed |
   | +1 | checkstyle | 83 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 916 | branch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 168 | trunk passed |
   | 0 | spotbugs | 414 | Used deprecated FindBugs config; considering switching to SpotBugs. |
   | +1 | findbugs | 608 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 21 | Maven dependency ordering for patch |
   | +1 | mvninstall | 465 | the patch passed |
   | +1 | compile | 325 | the patch passed |
   | +1 | cc | 325 | the patch passed |
   | +1 | javac | 325 | the patch passed |
   | +1 | checkstyle | 98 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 846 | patch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 199 | the patch passed |
   | +1 | findbugs | 574 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 184 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1300 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 49 | The patch does not generate ASF License warnings. |
   | | | 6975 | |


   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.ozone.container.common.impl.TestHddsDispatcher |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.om.TestOzoneManager |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |


   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=18.09.5 Server=18.09.5 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-927/5/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/927 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux e9ebbe5d3976 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4ea6c2f |
   | Default Java | 1.8.0_212 |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-927/5/artifact/out/patch-unit-hadoop-hdds.txt |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-927/5/artifact/out/patch-unit-hadoop-ozone.txt |
   | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-927/5/testReport/ |
   | Max. process+thread count | 4452 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-927/5/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #927: HDDS-1543. Implement addAcl,removeAcl,setAcl,getAcl for Prefix. Contr…

2019-06-11 Thread GitBox
hadoop-yetus commented on issue #927: HDDS-1543. Implement 
addAcl,removeAcl,setAcl,getAcl for Prefix. Contr…
URL: https://github.com/apache/hadoop/pull/927#issuecomment-501126871
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 26 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 50 | Maven dependency ordering for branch |
   | +1 | mvninstall | 517 | trunk passed |
   | +1 | compile | 282 | trunk passed |
   | +1 | checkstyle | 80 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 952 | branch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 175 | trunk passed |
   | 0 | spotbugs | 379 | Used deprecated FindBugs config; considering switching to SpotBugs. |
   | +1 | findbugs | 593 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 18 | Maven dependency ordering for patch |
   | +1 | mvninstall | 462 | the patch passed |
   | +1 | compile | 288 | the patch passed |
   | +1 | cc | 288 | the patch passed |
   | +1 | javac | 288 | the patch passed |
   | +1 | checkstyle | 83 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 742 | patch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 165 | the patch passed |
   | +1 | findbugs | 543 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 163 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1176 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 51 | The patch does not generate ASF License warnings. |
   | | | 6531 | |


   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.ozone.container.common.impl.TestHddsDispatcher |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |


   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-927/6/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/927 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 4b38b4a2239a 4.4.0-143-generic #169~14.04.2-Ubuntu SMP Wed Feb 13 15:00:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4ea6c2f |
   | Default Java | 1.8.0_212 |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-927/6/artifact/out/patch-unit-hadoop-hdds.txt |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-927/6/artifact/out/patch-unit-hadoop-ozone.txt |
   | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-927/6/testReport/ |
   | Max. process+thread count | 4999 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-927/6/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #942: HDDS-1587. Support dynamically adding delegated classes from to isolated class loader.

2019-06-11 Thread GitBox
hadoop-yetus commented on issue #942: HDDS-1587. Support dynamically adding 
delegated classes from to isolated class loader.
URL: https://github.com/apache/hadoop/pull/942#issuecomment-501124605
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 48 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | shelldocs | 1 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 15 | Maven dependency ordering for branch |
   | +1 | mvninstall | 567 | trunk passed |
   | +1 | compile | 319 | trunk passed |
   | +1 | checkstyle | 97 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 862 | branch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 207 | trunk passed |
   | 0 | spotbugs | 350 | Used deprecated FindBugs config; considering switching to SpotBugs. |
   | +1 | findbugs | 555 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for patch |
   | +1 | mvninstall | 510 | the patch passed |
   | +1 | compile | 357 | the patch passed |
   | +1 | javac | 357 | the patch passed |
   | -0 | checkstyle | 54 | hadoop-ozone: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 24 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 729 | patch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 192 | the patch passed |
   | +1 | findbugs | 599 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 206 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1402 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 72 | The patch does not generate ASF License warnings. |
   | | | 7205 | |


   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.ozone.container.common.impl.TestHddsDispatcher |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |


   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-942/2/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/942 |
   | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs compile javac javadoc mvninstall shadedclient findbugs checkstyle |
   | uname | Linux e94eb1dffd15 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4ea6c2f |
   | Default Java | 1.8.0_212 |
   | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-942/2/artifact/out/diff-checkstyle-hadoop-ozone.txt |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-942/2/artifact/out/patch-unit-hadoop-hdds.txt |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-942/2/artifact/out/patch-unit-hadoop-ozone.txt |
   | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-942/2/testReport/ |
   | Max. process+thread count | 5290 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozonefs U: hadoop-ozone |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-942/2/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] elek commented on issue #933: HDDS-1662. Missing test resources of integration-test project in targ…

2019-06-11 Thread GitBox
elek commented on issue #933: HDDS-1662. Missing test resources of integration-test project in targ…
URL: https://github.com/apache/hadoop/pull/933#issuecomment-501121830
 
 
   No problem, and sorry for not waiting long enough. I usually prefer to wait at least one day so that reviewers in all time zones have a chance to comment (I am in GMT+1/+2).
   
   But this was a small fix, and it was blocking my CI comments. I am happy to improve it further if you have any additional comments.





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #884: HDDS-1620. Implement Volume Write Requests to use Cache and DoubleBuffer.

2019-06-11 Thread GitBox
bharatviswa504 commented on a change in pull request #884: HDDS-1620. Implement 
Volume Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/884#discussion_r292743993
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeDeleteRequest.java
 ##
 @@ -0,0 +1,200 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.volume;
+
+import java.io.IOException;
+
+import com.google.common.base.Optional;
+import com.google.common.base.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.ozone.audit.AuditLogger;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.response.volume.OMVolumeCreateResponse;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.ozone.om.request.OMClientRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.volume.OMVolumeDeleteResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.DeleteVolumeRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.DeleteVolumeResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+
+/**
+ * Handles volume delete request.
+ */
+public class OMVolumeDeleteRequest extends OMClientRequest
+implements OMVolumeRequest {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMVolumeDeleteRequest.class);
+
+  public OMVolumeDeleteRequest(OMRequest omRequest) {
+super(omRequest);
+  }
+
+  @Override
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+  long transactionLogIndex) {
+
+DeleteVolumeRequest deleteVolumeRequest =
+getOmRequest().getDeleteVolumeRequest();
+Preconditions.checkNotNull(deleteVolumeRequest);
+
+String volume = deleteVolumeRequest.getVolumeName();
+
+OMMetrics omMetrics = ozoneManager.getMetrics();
+omMetrics.incNumVolumeDeletes();
+
+OMResponse.Builder omResponse = OMResponse.newBuilder().setCmdType(
+OzoneManagerProtocolProtos.Type.DeleteVolume).setStatus(
+OzoneManagerProtocolProtos.Status.OK).setSuccess(true);
+
+AuditLogger auditLogger = ozoneManager.getAuditLogger();
+OzoneManagerProtocolProtos.UserInfo userInfo = 
getOmRequest().getUserInfo();
+
+try {
+  // check Acl
+  if (ozoneManager.getAclsEnabled()) {
+checkAcls(ozoneManager, OzoneObj.ResourceType.VOLUME,
+OzoneObj.StoreType.OZONE, IAccessAuthorizer.ACLType.DELETE, volume,
+null, null);
+  }
+} catch (IOException ex) {
+  LOG.error("Volume deletion failed for volume:{}", volume, ex);
+  omMetrics.incNumVolumeDeleteFails();
+  auditLog(auditLogger, buildAuditMessage(OMAction.DELETE_VOLUME,
+  buildVolumeAuditMap(volume), ex, userInfo));
+  return new OMVolumeCreateResponse(null, null,
+  createErrorOMResponse(omResponse, ex));
+}
+
+OMMetadataManager omMetadataManager = ozoneManager.getMetadataManager();
+
+OmVolumeArgs omVolumeArgs = null;
+String owner = null;
+
+omMetadataManager.getLock().acquireVolumeLock(volume);
+try {
+  owner = getVolumeInfo(omMetadataManager, volume).getOwnerName();
+} catch (IOException ex) {
+  LOG.error("Volume deletion failed for volume:{}", volume, ex);
+  omMetrics.incNumVolumeDeleteFails();
+  auditLog(auditLogger, 

[GitHub] [hadoop] arp7 commented on a change in pull request #884: HDDS-1620. Implement Volume Write Requests to use Cache and DoubleBuffer.

2019-06-11 Thread GitBox
arp7 commented on a change in pull request #884: HDDS-1620. Implement Volume 
Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/884#discussion_r292743782
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeDeleteRequest.java
 ##
 @@ -0,0 +1,200 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.volume;
+
+import java.io.IOException;
+
+import com.google.common.base.Optional;
+import com.google.common.base.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.ozone.audit.AuditLogger;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.response.volume.OMVolumeCreateResponse;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.ozone.om.request.OMClientRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.volume.OMVolumeDeleteResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.DeleteVolumeRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.DeleteVolumeResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+
+/**
+ * Handles volume delete request.
+ */
+public class OMVolumeDeleteRequest extends OMClientRequest
+implements OMVolumeRequest {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMVolumeDeleteRequest.class);
+
+  public OMVolumeDeleteRequest(OMRequest omRequest) {
+super(omRequest);
+  }
+
+  @Override
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+  long transactionLogIndex) {
+
+DeleteVolumeRequest deleteVolumeRequest =
+getOmRequest().getDeleteVolumeRequest();
+Preconditions.checkNotNull(deleteVolumeRequest);
+
+String volume = deleteVolumeRequest.getVolumeName();
+
+OMMetrics omMetrics = ozoneManager.getMetrics();
+omMetrics.incNumVolumeDeletes();
+
+OMResponse.Builder omResponse = OMResponse.newBuilder().setCmdType(
+OzoneManagerProtocolProtos.Type.DeleteVolume).setStatus(
+OzoneManagerProtocolProtos.Status.OK).setSuccess(true);
+
+AuditLogger auditLogger = ozoneManager.getAuditLogger();
+OzoneManagerProtocolProtos.UserInfo userInfo = 
getOmRequest().getUserInfo();
+
+try {
+  // check Acl
+  if (ozoneManager.getAclsEnabled()) {
+checkAcls(ozoneManager, OzoneObj.ResourceType.VOLUME,
+OzoneObj.StoreType.OZONE, IAccessAuthorizer.ACLType.DELETE, volume,
+null, null);
+  }
+} catch (IOException ex) {
+  LOG.error("Volume deletion failed for volume:{}", volume, ex);
+  omMetrics.incNumVolumeDeleteFails();
+  auditLog(auditLogger, buildAuditMessage(OMAction.DELETE_VOLUME,
+  buildVolumeAuditMap(volume), ex, userInfo));
+  return new OMVolumeCreateResponse(null, null,
+  createErrorOMResponse(omResponse, ex));
+}
+
+OMMetadataManager omMetadataManager = ozoneManager.getMetadataManager();
+
+OmVolumeArgs omVolumeArgs = null;
+String owner = null;
+
+omMetadataManager.getLock().acquireVolumeLock(volume);
+try {
+  owner = getVolumeInfo(omMetadataManager, volume).getOwnerName();
+} catch (IOException ex) {
+  LOG.error("Volume deletion failed for volume:{}", volume, ex);
+  omMetrics.incNumVolumeDeleteFails();
+  auditLog(auditLogger, 

[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #884: HDDS-1620. Implement Volume Write Requests to use Cache and DoubleBuffer.

2019-06-11 Thread GitBox
bharatviswa504 commented on a change in pull request #884: HDDS-1620. Implement 
Volume Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/884#discussion_r292743619
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeDeleteRequest.java
 ##
 @@ -0,0 +1,200 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.volume;
+
+import java.io.IOException;
+
+import com.google.common.base.Optional;
+import com.google.common.base.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.ozone.audit.AuditLogger;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.response.volume.OMVolumeCreateResponse;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.ozone.om.request.OMClientRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.volume.OMVolumeDeleteResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.DeleteVolumeRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.DeleteVolumeResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+
+/**
+ * Handles volume delete request.
+ */
+public class OMVolumeDeleteRequest extends OMClientRequest
+implements OMVolumeRequest {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMVolumeDeleteRequest.class);
+
+  public OMVolumeDeleteRequest(OMRequest omRequest) {
+super(omRequest);
+  }
+
+  @Override
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+  long transactionLogIndex) {
+
+DeleteVolumeRequest deleteVolumeRequest =
+getOmRequest().getDeleteVolumeRequest();
+Preconditions.checkNotNull(deleteVolumeRequest);
+
+String volume = deleteVolumeRequest.getVolumeName();
+
+OMMetrics omMetrics = ozoneManager.getMetrics();
+omMetrics.incNumVolumeDeletes();
+
+OMResponse.Builder omResponse = OMResponse.newBuilder().setCmdType(
+OzoneManagerProtocolProtos.Type.DeleteVolume).setStatus(
+OzoneManagerProtocolProtos.Status.OK).setSuccess(true);
+
+AuditLogger auditLogger = ozoneManager.getAuditLogger();
+OzoneManagerProtocolProtos.UserInfo userInfo = 
getOmRequest().getUserInfo();
+
+try {
+  // check Acl
+  if (ozoneManager.getAclsEnabled()) {
+checkAcls(ozoneManager, OzoneObj.ResourceType.VOLUME,
+OzoneObj.StoreType.OZONE, IAccessAuthorizer.ACLType.DELETE, volume,
+null, null);
+  }
+} catch (IOException ex) {
+  LOG.error("Volume deletion failed for volume:{}", volume, ex);
+  omMetrics.incNumVolumeDeleteFails();
+  auditLog(auditLogger, buildAuditMessage(OMAction.DELETE_VOLUME,
+  buildVolumeAuditMap(volume), ex, userInfo));
+  return new OMVolumeCreateResponse(null, null,
+  createErrorOMResponse(omResponse, ex));
+}
+
+OMMetadataManager omMetadataManager = ozoneManager.getMetadataManager();
+
+OmVolumeArgs omVolumeArgs = null;
+String owner = null;
+
+omMetadataManager.getLock().acquireVolumeLock(volume);
+try {
+  owner = getVolumeInfo(omMetadataManager, volume).getOwnerName();
+} catch (IOException ex) {
+  LOG.error("Volume deletion failed for volume:{}", volume, ex);
+  omMetrics.incNumVolumeDeleteFails();
+  auditLog(auditLogger, 

[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #884: HDDS-1620. Implement Volume Write Requests to use Cache and DoubleBuffer.

2019-06-11 Thread GitBox
bharatviswa504 commented on a change in pull request #884: HDDS-1620. Implement 
Volume Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/884#discussion_r292743459
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeDeleteRequest.java
 ##
 @@ -0,0 +1,200 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.volume;
+
+import java.io.IOException;
+
+import com.google.common.base.Optional;
+import com.google.common.base.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.ozone.audit.AuditLogger;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.response.volume.OMVolumeCreateResponse;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.ozone.om.request.OMClientRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.volume.OMVolumeDeleteResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.DeleteVolumeRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.DeleteVolumeResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+
+/**
+ * Handles volume delete request.
+ */
+public class OMVolumeDeleteRequest extends OMClientRequest
+implements OMVolumeRequest {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMVolumeDeleteRequest.class);
+
+  public OMVolumeDeleteRequest(OMRequest omRequest) {
+super(omRequest);
+  }
+
+  @Override
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+  long transactionLogIndex) {
+
+DeleteVolumeRequest deleteVolumeRequest =
+getOmRequest().getDeleteVolumeRequest();
+Preconditions.checkNotNull(deleteVolumeRequest);
+
+String volume = deleteVolumeRequest.getVolumeName();
+
+OMMetrics omMetrics = ozoneManager.getMetrics();
+omMetrics.incNumVolumeDeletes();
+
+OMResponse.Builder omResponse = OMResponse.newBuilder().setCmdType(
+OzoneManagerProtocolProtos.Type.DeleteVolume).setStatus(
+OzoneManagerProtocolProtos.Status.OK).setSuccess(true);
+
+AuditLogger auditLogger = ozoneManager.getAuditLogger();
+OzoneManagerProtocolProtos.UserInfo userInfo = 
getOmRequest().getUserInfo();
+
+try {
+  // check Acl
+  if (ozoneManager.getAclsEnabled()) {
+checkAcls(ozoneManager, OzoneObj.ResourceType.VOLUME,
+OzoneObj.StoreType.OZONE, IAccessAuthorizer.ACLType.DELETE, volume,
+null, null);
+  }
+} catch (IOException ex) {
+  LOG.error("Volume deletion failed for volume:{}", volume, ex);
+  omMetrics.incNumVolumeDeleteFails();
+  auditLog(auditLogger, buildAuditMessage(OMAction.DELETE_VOLUME,
+  buildVolumeAuditMap(volume), ex, userInfo));
+  return new OMVolumeCreateResponse(null, null,
+  createErrorOMResponse(omResponse, ex));
+}
+
+OMMetadataManager omMetadataManager = ozoneManager.getMetadataManager();
+
+OmVolumeArgs omVolumeArgs = null;
+String owner = null;
+
+omMetadataManager.getLock().acquireVolumeLock(volume);
+try {
+  owner = getVolumeInfo(omMetadataManager, volume).getOwnerName();
+} catch (IOException ex) {
+  LOG.error("Volume deletion failed for volume:{}", volume, ex);
+  omMetrics.incNumVolumeDeleteFails();
+  auditLog(auditLogger, 

[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #884: HDDS-1620. Implement Volume Write Requests to use Cache and DoubleBuffer.

2019-06-11 Thread GitBox
bharatviswa504 commented on a change in pull request #884: HDDS-1620. Implement 
Volume Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/884#discussion_r292743150
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeDeleteRequest.java
 ##
 @@ -0,0 +1,200 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.volume;
+
+import java.io.IOException;
+
+import com.google.common.base.Optional;
+import com.google.common.base.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.ozone.audit.AuditLogger;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.response.volume.OMVolumeCreateResponse;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.ozone.om.request.OMClientRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.volume.OMVolumeDeleteResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.DeleteVolumeRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.DeleteVolumeResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+
+/**
+ * Handles volume delete request.
+ */
+public class OMVolumeDeleteRequest extends OMClientRequest
+implements OMVolumeRequest {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMVolumeDeleteRequest.class);
+
+  public OMVolumeDeleteRequest(OMRequest omRequest) {
+super(omRequest);
+  }
+
+  @Override
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+  long transactionLogIndex) {
+
+DeleteVolumeRequest deleteVolumeRequest =
+getOmRequest().getDeleteVolumeRequest();
+Preconditions.checkNotNull(deleteVolumeRequest);
+
+String volume = deleteVolumeRequest.getVolumeName();
+
+OMMetrics omMetrics = ozoneManager.getMetrics();
+omMetrics.incNumVolumeDeletes();
+
+OMResponse.Builder omResponse = OMResponse.newBuilder().setCmdType(
+OzoneManagerProtocolProtos.Type.DeleteVolume).setStatus(
+OzoneManagerProtocolProtos.Status.OK).setSuccess(true);
+
+AuditLogger auditLogger = ozoneManager.getAuditLogger();
+OzoneManagerProtocolProtos.UserInfo userInfo = 
getOmRequest().getUserInfo();
+
+try {
+  // check Acl
+  if (ozoneManager.getAclsEnabled()) {
+checkAcls(ozoneManager, OzoneObj.ResourceType.VOLUME,
+OzoneObj.StoreType.OZONE, IAccessAuthorizer.ACLType.DELETE, volume,
+null, null);
+  }
+} catch (IOException ex) {
+  LOG.error("Volume deletion failed for volume:{}", volume, ex);
+  omMetrics.incNumVolumeDeleteFails();
+  auditLog(auditLogger, buildAuditMessage(OMAction.DELETE_VOLUME,
+  buildVolumeAuditMap(volume), ex, userInfo));
+  return new OMVolumeCreateResponse(null, null,
+  createErrorOMResponse(omResponse, ex));
+}
+
+OMMetadataManager omMetadataManager = ozoneManager.getMetadataManager();
+
+OmVolumeArgs omVolumeArgs = null;
+String owner = null;
+
+omMetadataManager.getLock().acquireVolumeLock(volume);
+try {
+  owner = getVolumeInfo(omMetadataManager, volume).getOwnerName();
+} catch (IOException ex) {
+  LOG.error("Volume deletion failed for volume:{}", volume, ex);
+  omMetrics.incNumVolumeDeleteFails();
+  auditLog(auditLogger, 

[GitHub] [hadoop] arp7 commented on a change in pull request #884: HDDS-1620. Implement Volume Write Requests to use Cache and DoubleBuffer.

2019-06-11 Thread GitBox
arp7 commented on a change in pull request #884: HDDS-1620. Implement Volume 
Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/884#discussion_r292743123
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeDeleteRequest.java
 ##
 @@ -0,0 +1,200 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.volume;
+
+import java.io.IOException;
+
+import com.google.common.base.Optional;
+import com.google.common.base.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.ozone.audit.AuditLogger;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.response.volume.OMVolumeCreateResponse;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.ozone.om.request.OMClientRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.volume.OMVolumeDeleteResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.DeleteVolumeRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.DeleteVolumeResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+
+/**
+ * Handles volume delete request.
+ */
+public class OMVolumeDeleteRequest extends OMClientRequest
+implements OMVolumeRequest {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMVolumeDeleteRequest.class);
+
+  public OMVolumeDeleteRequest(OMRequest omRequest) {
+super(omRequest);
+  }
+
+  @Override
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+  long transactionLogIndex) {
+
+DeleteVolumeRequest deleteVolumeRequest =
+getOmRequest().getDeleteVolumeRequest();
+Preconditions.checkNotNull(deleteVolumeRequest);
+
+String volume = deleteVolumeRequest.getVolumeName();
+
+OMMetrics omMetrics = ozoneManager.getMetrics();
+omMetrics.incNumVolumeDeletes();
+
+OMResponse.Builder omResponse = OMResponse.newBuilder().setCmdType(
+OzoneManagerProtocolProtos.Type.DeleteVolume).setStatus(
+OzoneManagerProtocolProtos.Status.OK).setSuccess(true);
+
+AuditLogger auditLogger = ozoneManager.getAuditLogger();
+OzoneManagerProtocolProtos.UserInfo userInfo = 
getOmRequest().getUserInfo();
+
+try {
+  // check Acl
+  if (ozoneManager.getAclsEnabled()) {
+checkAcls(ozoneManager, OzoneObj.ResourceType.VOLUME,
+OzoneObj.StoreType.OZONE, IAccessAuthorizer.ACLType.DELETE, volume,
+null, null);
+  }
+} catch (IOException ex) {
+  LOG.error("Volume deletion failed for volume:{}", volume, ex);
+  omMetrics.incNumVolumeDeleteFails();
+  auditLog(auditLogger, buildAuditMessage(OMAction.DELETE_VOLUME,
+  buildVolumeAuditMap(volume), ex, userInfo));
+  return new OMVolumeCreateResponse(null, null,
+  createErrorOMResponse(omResponse, ex));
+}
+
+OMMetadataManager omMetadataManager = ozoneManager.getMetadataManager();
+
+OmVolumeArgs omVolumeArgs = null;
+String owner = null;
+
+omMetadataManager.getLock().acquireVolumeLock(volume);
+try {
+  owner = getVolumeInfo(omMetadataManager, volume).getOwnerName();
+} catch (IOException ex) {
+  LOG.error("Volume deletion failed for volume:{}", volume, ex);
+  omMetrics.incNumVolumeDeleteFails();
+  auditLog(auditLogger, 

[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #884: HDDS-1620. Implement Volume Write Requests to use Cache and DoubleBuffer.

2019-06-11 Thread GitBox
bharatviswa504 commented on a change in pull request #884: HDDS-1620. Implement 
Volume Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/884#discussion_r292742658
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -451,6 +456,21 @@ private boolean startsWith(byte[] firstArray, byte[] 
secondArray) {
   public boolean isVolumeEmpty(String volume) throws IOException {
 String volumePrefix = getVolumeKey(volume + OM_KEY_PREFIX);
 
+if (bucketTable instanceof TypedTable) {
 
 Review comment:
   Okay, will update it





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #884: HDDS-1620. Implement Volume Write Requests to use Cache and DoubleBuffer.

2019-06-11 Thread GitBox
bharatviswa504 commented on a change in pull request #884: HDDS-1620. Implement 
Volume Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/884#discussion_r292742716
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeDeleteRequest.java
 ##
 @@ -0,0 +1,200 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.volume;
+
+import java.io.IOException;
+
+import com.google.common.base.Optional;
+import com.google.common.base.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.ozone.audit.AuditLogger;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.response.volume.OMVolumeCreateResponse;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.ozone.om.request.OMClientRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.volume.OMVolumeDeleteResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.DeleteVolumeRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.DeleteVolumeResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+
+/**
+ * Handles volume delete request.
+ */
+public class OMVolumeDeleteRequest extends OMClientRequest
+implements OMVolumeRequest {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMVolumeDeleteRequest.class);
+
+  public OMVolumeDeleteRequest(OMRequest omRequest) {
+super(omRequest);
+  }
+
+  @Override
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+  long transactionLogIndex) {
+
+DeleteVolumeRequest deleteVolumeRequest =
+getOmRequest().getDeleteVolumeRequest();
+Preconditions.checkNotNull(deleteVolumeRequest);
+
+String volume = deleteVolumeRequest.getVolumeName();
+
+OMMetrics omMetrics = ozoneManager.getMetrics();
+omMetrics.incNumVolumeDeletes();
+
+OMResponse.Builder omResponse = OMResponse.newBuilder().setCmdType(
+OzoneManagerProtocolProtos.Type.DeleteVolume).setStatus(
+OzoneManagerProtocolProtos.Status.OK).setSuccess(true);
+
+AuditLogger auditLogger = ozoneManager.getAuditLogger();
+OzoneManagerProtocolProtos.UserInfo userInfo = 
getOmRequest().getUserInfo();
+
+try {
+  // check Acl
+  if (ozoneManager.getAclsEnabled()) {
+checkAcls(ozoneManager, OzoneObj.ResourceType.VOLUME,
+OzoneObj.StoreType.OZONE, IAccessAuthorizer.ACLType.DELETE, volume,
+null, null);
+  }
+} catch (IOException ex) {
+  LOG.error("Volume deletion failed for volume:{}", volume, ex);
+  omMetrics.incNumVolumeDeleteFails();
+  auditLog(auditLogger, buildAuditMessage(OMAction.DELETE_VOLUME,
+  buildVolumeAuditMap(volume), ex, userInfo));
+  return new OMVolumeCreateResponse(null, null,
+  createErrorOMResponse(omResponse, ex));
+}
+
+OMMetadataManager omMetadataManager = ozoneManager.getMetadataManager();
+
+OmVolumeArgs omVolumeArgs = null;
+String owner = null;
+
+omMetadataManager.getLock().acquireVolumeLock(volume);
+try {
+  owner = getVolumeInfo(omMetadataManager, volume).getOwnerName();
+} catch (IOException ex) {
+  LOG.error("Volume deletion failed for volume:{}", volume, ex);
+  omMetrics.incNumVolumeDeleteFails();
+  auditLog(auditLogger, 

[GitHub] [hadoop] arp7 commented on a change in pull request #884: HDDS-1620. Implement Volume Write Requests to use Cache and DoubleBuffer.

2019-06-11 Thread GitBox
arp7 commented on a change in pull request #884: HDDS-1620. Implement Volume 
Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/884#discussion_r292740663
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeDeleteRequest.java
 ##
 @@ -0,0 +1,200 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.volume;
+
+import java.io.IOException;
+
+import com.google.common.base.Optional;
+import com.google.common.base.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.ozone.audit.AuditLogger;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.response.volume.OMVolumeCreateResponse;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.ozone.om.request.OMClientRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.volume.OMVolumeDeleteResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.DeleteVolumeRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.DeleteVolumeResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+
+/**
+ * Handles volume delete request.
+ */
+public class OMVolumeDeleteRequest extends OMClientRequest
+implements OMVolumeRequest {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMVolumeDeleteRequest.class);
+
+  public OMVolumeDeleteRequest(OMRequest omRequest) {
+super(omRequest);
+  }
+
+  @Override
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+  long transactionLogIndex) {
+
+DeleteVolumeRequest deleteVolumeRequest =
+getOmRequest().getDeleteVolumeRequest();
+Preconditions.checkNotNull(deleteVolumeRequest);
+
+String volume = deleteVolumeRequest.getVolumeName();
+
+OMMetrics omMetrics = ozoneManager.getMetrics();
+omMetrics.incNumVolumeDeletes();
+
+OMResponse.Builder omResponse = OMResponse.newBuilder().setCmdType(
+OzoneManagerProtocolProtos.Type.DeleteVolume).setStatus(
+OzoneManagerProtocolProtos.Status.OK).setSuccess(true);
+
+AuditLogger auditLogger = ozoneManager.getAuditLogger();
+OzoneManagerProtocolProtos.UserInfo userInfo = 
getOmRequest().getUserInfo();
+
+try {
+  // check Acl
+  if (ozoneManager.getAclsEnabled()) {
+checkAcls(ozoneManager, OzoneObj.ResourceType.VOLUME,
+OzoneObj.StoreType.OZONE, IAccessAuthorizer.ACLType.DELETE, volume,
+null, null);
+  }
+} catch (IOException ex) {
+  LOG.error("Volume deletion failed for volume:{}", volume, ex);
+  omMetrics.incNumVolumeDeleteFails();
+  auditLog(auditLogger, buildAuditMessage(OMAction.DELETE_VOLUME,
+  buildVolumeAuditMap(volume), ex, userInfo));
+  return new OMVolumeCreateResponse(null, null,
+  createErrorOMResponse(omResponse, ex));
+}
+
+OMMetadataManager omMetadataManager = ozoneManager.getMetadataManager();
+
+OmVolumeArgs omVolumeArgs = null;
+String owner = null;
+
+omMetadataManager.getLock().acquireVolumeLock(volume);
+try {
+  owner = getVolumeInfo(omMetadataManager, volume).getOwnerName();
+} catch (IOException ex) {
+  LOG.error("Volume deletion failed for volume:{}", volume, ex);
+  omMetrics.incNumVolumeDeleteFails();
+  auditLog(auditLogger, 

[GitHub] [hadoop] arp7 commented on a change in pull request #884: HDDS-1620. Implement Volume Write Requests to use Cache and DoubleBuffer.

2019-06-11 Thread GitBox
arp7 commented on a change in pull request #884: HDDS-1620. Implement Volume 
Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/884#discussion_r292741029
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -451,6 +456,21 @@ private boolean startsWith(byte[] firstArray, byte[] 
secondArray) {
   public boolean isVolumeEmpty(String volume) throws IOException {
 String volumePrefix = getVolumeKey(volume + OM_KEY_PREFIX);
 
+if (bucketTable instanceof TypedTable) {
 
 Review comment:
   > This is more like a safer check, in a case in future if someone changes 
the bucketTable to RDBTable
   
   But that would change all of our assumptions about how HA requests work, 
correct? Perhaps it is safer to fail rather than ignore it.
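
   A minimal, self-contained sketch of the fail-fast alternative suggested here:
   reject an unexpected table implementation instead of silently skipping the
   cache check. The class and method names below are illustrative stand-ins,
   not the real Ozone types.

   // Fail fast when the table is not the cache-backed implementation we expect.
   final class TableGuard {

     interface Table { }
     static final class TypedTable implements Table { }
     static final class RDBTable implements Table { }

     static TypedTable requireTypedTable(Table table) {
       if (!(table instanceof TypedTable)) {
         // Throwing keeps a future change of the table implementation from
         // quietly breaking assumptions about how cached requests are handled.
         throw new IllegalStateException(
             "Expected a cache-backed TypedTable but got "
                 + table.getClass().getSimpleName());
       }
       return (TypedTable) table;
     }

     public static void main(String[] args) {
       requireTypedTable(new TypedTable());   // passes
       requireTypedTable(new RDBTable());     // fails fast with a clear message
     }
   }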


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] arp7 commented on a change in pull request #884: HDDS-1620. Implement Volume Write Requests to use Cache and DoubleBuffer.

2019-06-11 Thread GitBox
arp7 commented on a change in pull request #884: HDDS-1620. Implement Volume 
Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/884#discussion_r292740477
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeDeleteRequest.java
 ##
 @@ -0,0 +1,200 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.volume;
+
+import java.io.IOException;
+
+import com.google.common.base.Optional;
+import com.google.common.base.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.ozone.audit.AuditLogger;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.response.volume.OMVolumeCreateResponse;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.ozone.om.request.OMClientRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.volume.OMVolumeDeleteResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.DeleteVolumeRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.DeleteVolumeResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+
+/**
+ * Handles volume delete request.
+ */
+public class OMVolumeDeleteRequest extends OMClientRequest
+implements OMVolumeRequest {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMVolumeDeleteRequest.class);
+
+  public OMVolumeDeleteRequest(OMRequest omRequest) {
+super(omRequest);
+  }
+
+  @Override
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+  long transactionLogIndex) {
+
+DeleteVolumeRequest deleteVolumeRequest =
+getOmRequest().getDeleteVolumeRequest();
+Preconditions.checkNotNull(deleteVolumeRequest);
+
+String volume = deleteVolumeRequest.getVolumeName();
+
+OMMetrics omMetrics = ozoneManager.getMetrics();
+omMetrics.incNumVolumeDeletes();
+
+OMResponse.Builder omResponse = OMResponse.newBuilder().setCmdType(
+OzoneManagerProtocolProtos.Type.DeleteVolume).setStatus(
+OzoneManagerProtocolProtos.Status.OK).setSuccess(true);
+
+AuditLogger auditLogger = ozoneManager.getAuditLogger();
+OzoneManagerProtocolProtos.UserInfo userInfo = 
getOmRequest().getUserInfo();
+
+try {
+  // check Acl
+  if (ozoneManager.getAclsEnabled()) {
+checkAcls(ozoneManager, OzoneObj.ResourceType.VOLUME,
+OzoneObj.StoreType.OZONE, IAccessAuthorizer.ACLType.DELETE, volume,
+null, null);
+  }
+} catch (IOException ex) {
+  LOG.error("Volume deletion failed for volume:{}", volume, ex);
+  omMetrics.incNumVolumeDeleteFails();
+  auditLog(auditLogger, buildAuditMessage(OMAction.DELETE_VOLUME,
+  buildVolumeAuditMap(volume), ex, userInfo));
+  return new OMVolumeCreateResponse(null, null,
+  createErrorOMResponse(omResponse, ex));
+}
+
+OMMetadataManager omMetadataManager = ozoneManager.getMetadataManager();
+
+OmVolumeArgs omVolumeArgs = null;
+String owner = null;
+
+omMetadataManager.getLock().acquireVolumeLock(volume);
+try {
+  owner = getVolumeInfo(omMetadataManager, volume).getOwnerName();
+} catch (IOException ex) {
+  LOG.error("Volume deletion failed for volume:{}", volume, ex);
+  omMetrics.incNumVolumeDeleteFails();
+  auditLog(auditLogger, 

[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #884: HDDS-1620. Implement Volume Write Requests to use Cache and DoubleBuffer.

2019-06-11 Thread GitBox
bharatviswa504 commented on a change in pull request #884: HDDS-1620. Implement 
Volume Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/884#discussion_r292740740
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -451,6 +456,21 @@ private boolean startsWith(byte[] firstArray, byte[] 
secondArray) {
   public boolean isVolumeEmpty(String volume) throws IOException {
 String volumePrefix = getVolumeKey(volume + OM_KEY_PREFIX);
 
+if (bucketTable instanceof TypedTable) {
 
 Review comment:
   When bucketTable is not an instance of TypedTable; only TypedTables have a 
cache.
   This is more of a safety check: if someone changes bucketTable to an RDBTable 
in the future, we skip the logic inside this if check, because cacheIterator 
throws NotImplementedException for RDBTable.
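
   To illustrate the shape of the guarded check described above, here is a
   small, self-contained sketch: the in-memory cache is consulted only when the
   table implementation actually provides one, and everything else falls back
   to the on-disk lookup. The types and method names are simplified stand-ins,
   not the real Ozone classes.

   import java.util.Iterator;
   import java.util.Map;
   import java.util.concurrent.ConcurrentHashMap;

   final class CacheAwareEmptinessCheck {

     /** A table that may or may not be backed by an in-memory cache. */
     interface Table {
       boolean hasKeyWithPrefix(String prefix);   // on-disk lookup
     }

     /** Only this implementation exposes a cache iterator. */
     static final class CachedTable implements Table {
       final Map<String, String> cache = new ConcurrentHashMap<>();
       Iterator<String> cacheIterator() { return cache.keySet().iterator(); }
       public boolean hasKeyWithPrefix(String prefix) { return false; }
     }

     /** Returns true when no bucket key starts with the volume prefix. */
     static boolean isVolumeEmpty(Table bucketTable, String volumePrefix) {
       if (bucketTable instanceof CachedTable) {
         // Check the (possibly not yet flushed) cache entries first.
         Iterator<String> it = ((CachedTable) bucketTable).cacheIterator();
         while (it.hasNext()) {
           if (it.next().startsWith(volumePrefix)) {
             return false;
           }
         }
       }
       return !bucketTable.hasKeyWithPrefix(volumePrefix);
     }

     public static void main(String[] args) {
       CachedTable table = new CachedTable();
       table.cache.put("/vol1/bucket1", "bucket1");
       System.out.println(isVolumeEmpty(table, "/vol1/"));   // false
       System.out.println(isVolumeEmpty(table, "/vol2/"));   // true
     }
   }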


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #884: HDDS-1620. Implement Volume Write Requests to use Cache and DoubleBuffer.

2019-06-11 Thread GitBox
hadoop-yetus commented on issue #884: HDDS-1620. Implement Volume Write 
Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/884#issuecomment-501113223
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 46 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 12 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for branch |
   | +1 | mvninstall | 508 | trunk passed |
   | +1 | compile | 300 | trunk passed |
   | +1 | checkstyle | 83 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 961 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 178 | trunk passed |
   | 0 | spotbugs | 377 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 598 | trunk passed |
   | -0 | patch | 429 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 30 | Maven dependency ordering for patch |
   | +1 | mvninstall | 470 | the patch passed |
   | +1 | compile | 311 | the patch passed |
   | +1 | cc | 311 | the patch passed |
   | -1 | javac | 209 | hadoop-ozone generated 4 new + 3 unchanged - 0 fixed = 
7 total (was 3) |
   | +1 | checkstyle | 82 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 760 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 196 | the patch passed |
   | +1 | findbugs | 630 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 176 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1890 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 51 | The patch does not generate ASF License warnings. |
   | | | 7461 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.common.impl.TestHddsDispatcher 
|
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.scm.pipeline.TestSCMPipelineMetrics |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/18/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/884 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc xml |
   | uname | Linux d64606d540e7 4.4.0-141-generic #167~14.04.1-Ubuntu SMP Mon 
Dec 10 13:20:24 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4ea6c2f |
   | Default Java | 1.8.0_212 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/18/artifact/out/diff-compile-javac-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/18/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/18/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/18/testReport/ |
   | Max. process+thread count | 4102 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/18/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] arp7 commented on a change in pull request #884: HDDS-1620. Implement Volume Write Requests to use Cache and DoubleBuffer.

2019-06-11 Thread GitBox
arp7 commented on a change in pull request #884: HDDS-1620. Implement Volume 
Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/884#discussion_r291757315
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -451,6 +456,21 @@ private boolean startsWith(byte[] firstArray, byte[] 
secondArray) {
   public boolean isVolumeEmpty(String volume) throws IOException {
 String volumePrefix = getVolumeKey(volume + OM_KEY_PREFIX);
 
+if (bucketTable instanceof TypedTable) {
 
 Review comment:
   When will this check be false?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #884: HDDS-1620. Implement Volume Write Requests to use Cache and DoubleBuffer.

2019-06-11 Thread GitBox
hadoop-yetus commented on issue #884: HDDS-1620. Implement Volume Write 
Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/884#issuecomment-501112150
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 71 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 12 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 56 | Maven dependency ordering for branch |
   | +1 | mvninstall | 475 | trunk passed |
   | +1 | compile | 273 | trunk passed |
   | +1 | checkstyle | 79 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 896 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 166 | trunk passed |
   | 0 | spotbugs | 330 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 513 | trunk passed |
   | -0 | patch | 385 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 29 | Maven dependency ordering for patch |
   | +1 | mvninstall | 443 | the patch passed |
   | +1 | compile | 280 | the patch passed |
   | +1 | cc | 280 | the patch passed |
   | -1 | javac | 184 | hadoop-ozone generated 4 new + 3 unchanged - 0 fixed = 
7 total (was 3) |
   | +1 | checkstyle | 83 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 732 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 170 | the patch passed |
   | -1 | findbugs | 115 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 182 | hadoop-hdds in the patch failed. |
   | -1 | unit | 114 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 37 | The patch does not generate ASF License warnings. |
   | | | 5114 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.common.impl.TestHddsDispatcher 
|
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.5 Server=18.09.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/20/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/884 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc xml |
   | uname | Linux 44a9d0c11fc0 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4ea6c2f |
   | Default Java | 1.8.0_212 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/20/artifact/out/diff-compile-javac-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/20/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/20/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/20/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/20/testReport/ |
   | Max. process+thread count | 352 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/20/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #884: HDDS-1620. Implement Volume Write Requests to use Cache and DoubleBuffer.

2019-06-11 Thread GitBox
hadoop-yetus commented on issue #884: HDDS-1620. Implement Volume Write 
Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/884#issuecomment-50038
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 42 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 12 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 83 | Maven dependency ordering for branch |
   | +1 | mvninstall | 562 | trunk passed |
   | +1 | compile | 294 | trunk passed |
   | +1 | checkstyle | 85 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 925 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 174 | trunk passed |
   | 0 | spotbugs | 345 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 538 | trunk passed |
   | -0 | patch | 404 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 30 | Maven dependency ordering for patch |
   | -1 | mvninstall | 148 | hadoop-ozone in the patch failed. |
   | -1 | compile | 56 | hadoop-ozone in the patch failed. |
   | -1 | cc | 56 | hadoop-ozone in the patch failed. |
   | -1 | javac | 56 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 78 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 712 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 161 | the patch passed |
   | -1 | findbugs | 107 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 178 | hadoop-hdds in the patch failed. |
   | -1 | unit | 118 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 41 | The patch does not generate ASF License warnings. |
   | | | 4913 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.common.impl.TestHddsDispatcher 
|
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/19/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/884 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc xml |
   | uname | Linux 8cb18aedf10c 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4ea6c2f |
   | Default Java | 1.8.0_212 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/19/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/19/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | cc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/19/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/19/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/19/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/19/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/19/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/19/testReport/ |
   | Max. process+thread count | 332 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/19/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: 

[jira] [Commented] (HADOOP-16365) Upgrade jackson-databind to 2.9.9

2019-06-11 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16861731#comment-16861731
 ] 

Hadoop QA commented on HADOOP-16365:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
47s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
38m 32s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 24s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
15s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HADOOP-16365 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12971525/HADOOP-16365.002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  |
| uname | Linux 52e5f882a785 4.4.0-144-generic #170~14.04.1-Ubuntu SMP Mon Mar 
18 15:02:05 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 4ea6c2f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16317/testReport/ |
| Max. process+thread count | 342 (vs. ulimit of 1) |
| modules | C: hadoop-project U: hadoop-project |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16317/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Upgrade jackson-databind to 2.9.9
> -
>
> Key: HADOOP-16365
> URL: https://issues.apache.org/jira/browse/HADOOP-16365
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Reporter: Shweta
>Assignee: Shweta
>Priority: Major
> Attachments: HADOOP-16365.001.patch, HADOOP-16365.002.patch
>
>
> Add 2.9.9 version of Jackson-databind 




[GitHub] [hadoop] hadoop-yetus commented on issue #884: HDDS-1620. Implement Volume Write Requests to use Cache and DoubleBuffer.

2019-06-11 Thread GitBox
hadoop-yetus commented on issue #884: HDDS-1620. Implement Volume Write 
Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/884#issuecomment-501109410
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 44 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 12 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 59 | Maven dependency ordering for branch |
   | +1 | mvninstall | 511 | trunk passed |
   | +1 | compile | 287 | trunk passed |
   | +1 | checkstyle | 80 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 960 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 177 | trunk passed |
   | 0 | spotbugs | 361 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 577 | trunk passed |
   | -0 | patch | 413 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 32 | Maven dependency ordering for patch |
   | +1 | mvninstall | 485 | the patch passed |
   | +1 | compile | 327 | the patch passed |
   | +1 | cc | 327 | the patch passed |
   | -1 | javac | 218 | hadoop-ozone generated 4 new + 3 unchanged - 0 fixed = 
7 total (was 3) |
   | -0 | checkstyle | 45 | hadoop-ozone: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 721 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 173 | the patch passed |
   | +1 | findbugs | 583 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 165 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1645 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 53 | The patch does not generate ASF License warnings. |
   | | | 7134 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.common.impl.TestHddsDispatcher 
|
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestReadRetries |
   |   | hadoop.hdds.scm.pipeline.TestSCMRestart |
   |   | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/17/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/884 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc xml |
   | uname | Linux 3d45043824b1 4.4.0-141-generic #167~14.04.1-Ubuntu SMP Mon 
Dec 10 13:20:24 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4ea6c2f |
   | Default Java | 1.8.0_212 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/17/artifact/out/diff-compile-javac-hadoop-ozone.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/17/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/17/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/17/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/17/testReport/ |
   | Max. process+thread count | 5309 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/17/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

[jira] [Commented] (HADOOP-16354) Enable AuthFilter as default for WebHdfs

2019-06-11 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16861720#comment-16861720
 ] 

Prabhu Joseph commented on HADOOP-16354:


Thanks [~eyang].

> Enable AuthFilter as default for WebHdfs
> 
>
> Key: HADOOP-16354
> URL: https://issues.apache.org/jira/browse/HADOOP-16354
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16354-001.patch, HADOOP-16354-002.patch, 
> HADOOP-16354-003.patch, HADOOP-16354-004.patch, HADOOP-16354-005.patch
>
>
> HADOOP-16314 provides a generic option to configure 
> ProxyUserAuthenticationFilterInitializer (Kerberos + doAs support) for all 
> the services. If this is not configured, AuthenticationFilter is used for 
> the NameNode UI and WebHdfs. This will enable AuthFilter as the default for 
> WebHdfs so that it stays backward compatible.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] chenjunjiedada commented on a change in pull request #942: HDDS-1587. Support dynamically adding delegated classes from to isolated class loader.

2019-06-11 Thread GitBox
chenjunjiedada commented on a change in pull request #942: HDDS-1587. Support 
dynamically adding delegated classes from to isolated class loader.
URL: https://github.com/apache/hadoop/pull/942#discussion_r292732610
 
 

 ##
 File path: 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/FilteredClassLoader.java
 ##
 @@ -57,6 +57,12 @@ public FilteredClassLoader(URL[] urls, ClassLoader parent) {
 
delegatedClasses.add("org.apache.hadoop.fs.ozone.OzoneFSStorageStatistics");
 delegatedClasses.add("org.apache.hadoop.fs.ozone.Statistic");
 delegatedClasses.add("org.apache.hadoop.fs.Seekable");
+String[] dynamicDelegatedClasses =
+System.getProperty("HADOOP_OZONE_DELEGATED_CLASSES").split(";");
+for (String delegatedClass : dynamicDelegatedClasses) {
+  delegatedClasses.add(delegatedClass);
+}
 
 Review comment:
   I see.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] chenjunjiedada commented on a change in pull request #942: HDDS-1587. Support dynamically adding delegated classes from to isolated class loader.

2019-06-11 Thread GitBox
chenjunjiedada commented on a change in pull request #942: HDDS-1587. Support 
dynamically adding delegated classes from to isolated class loader.
URL: https://github.com/apache/hadoop/pull/942#discussion_r292722838
 
 

 ##
 File path: 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/FilteredClassLoader.java
 ##
 @@ -57,6 +57,12 @@ public FilteredClassLoader(URL[] urls, ClassLoader parent) {
 
delegatedClasses.add("org.apache.hadoop.fs.ozone.OzoneFSStorageStatistics");
 delegatedClasses.add("org.apache.hadoop.fs.ozone.Statistic");
 delegatedClasses.add("org.apache.hadoop.fs.Seekable");
+String[] dynamicDelegatedClasses =
+System.getProperty("HADOOP_OZONE_DELEGATED_CLASSES").split(";");
+for (String delegatedClass : dynamicDelegatedClasses) {
+  delegatedClasses.add(delegatedClass);
+}
 
 Review comment:
   getTrimmedStringCollection is a member function of the Configuration class, 
so we can't use it directly here. I can add trim() if we care about the 
whitespace. Is that OK?
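
   A small, runnable sketch of that parsing, under the assumption that the
   value comes from the (illustrative) HADOOP_OZONE_DELEGATED_CLASSES system
   property: split on ';', trim each entry, and skip empties, without assuming
   the property is set at all.

   import java.util.ArrayList;
   import java.util.List;

   final class DelegatedClassesParser {

     /** Null-safe, whitespace-tolerant parsing of a ';'-separated class list. */
     static List<String> parseDelegatedClasses(String propertyValue) {
       List<String> classes = new ArrayList<>();
       if (propertyValue == null || propertyValue.isEmpty()) {
         return classes;              // property not set: nothing extra to delegate
       }
       for (String entry : propertyValue.split(";")) {
         String trimmed = entry.trim();   // tolerate spaces around ';'
         if (!trimmed.isEmpty()) {
           classes.add(trimmed);
         }
       }
       return classes;
     }

     public static void main(String[] args) {
       System.out.println(
           parseDelegatedClasses(" org.example.Foo ; org.example.Bar;; "));
       // prints: [org.example.Foo, org.example.Bar]
     }
   }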


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ChenSammi commented on issue #933: HDDS-1662.Missing test resources of integrataion-test project in targ…

2019-06-11 Thread GitBox
ChenSammi commented on issue #933: HDDS-1662.Missing test resources of 
integrataion-test project in targ…
URL: https://github.com/apache/hadoop/pull/933#issuecomment-501103242
 
 
   @elek, you are right. We can use getResourceAsStream instead. Thanks for 
improving the code to solve the issue. 
   I am based in Shanghai. Sorry for missing the review. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16352) Fix hugo warnings in hadoop-site

2019-06-11 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16861709#comment-16861709
 ] 

Akira Ajisaka edited comment on HADOOP-16352 at 6/12/19 3:02 AM:
-

+1, committed. Thanks!


was (Author: ajisakaa):
Committed. Thanks!

> Fix hugo warnings in hadoop-site
> 
>
> Key: HADOOP-16352
> URL: https://issues.apache.org/jira/browse/HADOOP-16352
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, site
> Environment: Hugo v0.55.6
>Reporter: Akira Ajisaka
>Assignee: Wanqiang Ji
>Priority: Minor
>  Labels: newbie
> Fix For: asf-site
>
> Attachments: HADOOP-16352.001.patch
>
>
> https://github.com/apache/hadoop-site
> {noformat}
> $ hugo
> Building sites … WARN 2019/06/07 18:53:18 Page's .BaseFileName is deprecated 
> and will be removed in a future release. Use .File.BaseFileName.
> WARN 2019/06/07 18:53:18 Page's .URL is deprecated and will be removed in a 
> future release. Use .Permalink or .RelPermalink. If what you want is the 
> front matter URL value, use .Params.url.
> {noformat}
> {noformat}
> $ hugo
> Building sites … WARN 2019/06/11 23:15:54 Page's .URL is deprecated and will 
> be removed in a future release. 
> Use .Permalink or .RelPermalink. If what you want is the front matter URL 
> value, use .Params.url.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16352) Fix hugo warnings in hadoop-site

2019-06-11 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16352:
---
Fix Version/s: asf-site

> Fix hugo warnings in hadoop-site
> 
>
> Key: HADOOP-16352
> URL: https://issues.apache.org/jira/browse/HADOOP-16352
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, site
> Environment: Hugo v0.55.6
>Reporter: Akira Ajisaka
>Assignee: Wanqiang Ji
>Priority: Minor
>  Labels: newbie
> Fix For: asf-site
>
> Attachments: HADOOP-16352.001.patch
>
>
> https://github.com/apache/hadoop-site
> {noformat}
> $ hugo
> Building sites … WARN 2019/06/07 18:53:18 Page's .BaseFileName is deprecated 
> and will be removed in a future release. Use .File.BaseFileName.
> WARN 2019/06/07 18:53:18 Page's .URL is deprecated and will be removed in a 
> future release. Use .Permalink or .RelPermalink. If what you want is the 
> front matter URL value, use .Params.url.
> {noformat}
> {noformat}
> $ hugo
> Building sites … WARN 2019/06/11 23:15:54 Page's .URL is deprecated and will 
> be removed in a future release. 
> Use .Permalink or .RelPermalink. If what you want is the front matter URL 
> value, use .Params.url.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16352) Fix hugo warnings in hadoop-site

2019-06-11 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16352:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed. Thanks!

> Fix hugo warnings in hadoop-site
> 
>
> Key: HADOOP-16352
> URL: https://issues.apache.org/jira/browse/HADOOP-16352
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, site
> Environment: Hugo v0.55.6
>Reporter: Akira Ajisaka
>Assignee: Wanqiang Ji
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-16352.001.patch
>
>
> https://github.com/apache/hadoop-site
> {noformat}
> $ hugo
> Building sites … WARN 2019/06/07 18:53:18 Page's .BaseFileName is deprecated 
> and will be removed in a future release. Use .File.BaseFileName.
> WARN 2019/06/07 18:53:18 Page's .URL is deprecated and will be removed in a 
> future release. Use .Permalink or .RelPermalink. If what you want is the 
> front matter URL value, use .Params.url.
> {noformat}
> {noformat}
> $ hugo
> Building sites … WARN 2019/06/11 23:15:54 Page's .URL is deprecated and will 
> be removed in a future release. 
> Use .Permalink or .RelPermalink. If what you want is the front matter URL 
> value, use .Params.url.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16352) Fix hugo warnings in hadoop-site

2019-06-11 Thread Wanqiang Ji (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wanqiang Ji updated HADOOP-16352:
-
Component/s: site

> Fix hugo warnings in hadoop-site
> 
>
> Key: HADOOP-16352
> URL: https://issues.apache.org/jira/browse/HADOOP-16352
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, site
> Environment: Hugo v0.55.6
>Reporter: Akira Ajisaka
>Assignee: Wanqiang Ji
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-16352.001.patch
>
>
> https://github.com/apache/hadoop-site
> {noformat}
> $ hugo
> Building sites … WARN 2019/06/07 18:53:18 Page's .BaseFileName is deprecated 
> and will be removed in a future release. Use .File.BaseFileName.
> WARN 2019/06/07 18:53:18 Page's .URL is deprecated and will be removed in a 
> future release. Use .Permalink or .RelPermalink. If what you want is the 
> front matter URL value, use .Params.url.
> {noformat}
> {noformat}
> $ hugo
> Building sites … WARN 2019/06/11 23:15:54 Page's .URL is deprecated and will 
> be removed in a future release. 
> Use .Permalink or .RelPermalink. If what you want is the front matter URL 
> value, use .Params.url.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16112) Delete the baseTrashPath's subDir leads to don't modify baseTrashPath

2019-06-11 Thread Lisheng Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16861705#comment-16861705
 ] 

Lisheng Sun commented on HADOOP-16112:
--

[~hexiaoqiao] The current result is not the expected result. Please correct 
me if I am wrong; I am confused about this issue. Thanks.

> Delete the baseTrashPath's subDir leads to don't modify baseTrashPath
> -
>
> Key: HADOOP-16112
> URL: https://issues.apache.org/jira/browse/HADOOP-16112
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.2.0
>Reporter: Lisheng Sun
>Priority: Major
> Attachments: HADOOP-16112.001.patch, HADOOP-16112.002.patch
>
>
> There is a race condition in TrashPolicyDefault#moveToTrash:
> try {
>   if (!fs.mkdirs(baseTrashPath, PERMISSION)) { // create current
>     LOG.warn("Can't create(mkdir) trash directory: " + baseTrashPath);
>     return false;
>   }
> } catch (FileAlreadyExistsException e) {
>   // find the path which is not a directory, and modify baseTrashPath
>   // & trashPath, then mkdirs
>   Path existsFilePath = baseTrashPath;
>   while (!fs.exists(existsFilePath)) {
>     existsFilePath = existsFilePath.getParent();
>   }
>   // case: another thread deletes existsFilePath here, and the result does
>   // not meet expectations. For example, there is
>   // /user/u_sunlisheng/.Trash/Current/user/u_sunlisheng/b; when deleting
>   // /user/u_sunlisheng/b/a, if existsFilePath has been deleted, the result
>   // becomes /user/u_sunlisheng/.Trash/Current/user/u_sunlisheng+timestamp/b/a.
>   // So when existsFilePath is deleted, don't modify baseTrashPath.
>   baseTrashPath = new Path(baseTrashPath.toString().replace(
>       existsFilePath.toString(), existsFilePath.toString() + Time.now()));
>   trashPath = new Path(baseTrashPath, trashPath.getName());
>   // retry, ignore current failure
>   --i;
>   continue;
> } catch (IOException e) {
>   LOG.warn("Can't create trash directory: " + baseTrashPath, e);
>   cause = e;
>   break;
> }
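
A minimal, runnable sketch of the retry idea described above: only rewrite
baseTrashPath when the conflicting path still exists, and if another thread has
already deleted it, keep the original base path and simply retry. The FakeFs
class stands in for the FileSystem calls and is purely illustrative.

import java.util.HashSet;
import java.util.Set;

final class TrashPathRetrySketch {

  /** Stand-in for the small part of FileSystem used in the description. */
  static final class FakeFs {
    final Set<String> existing = new HashSet<>();
    boolean exists(String path) { return existing.contains(path); }
  }

  /**
   * Returns the trash base path to use for the next mkdirs retry: rename the
   * conflicting component only if it still exists; otherwise leave the base
   * path untouched and let the caller retry.
   */
  static String nextBaseTrashPath(FakeFs fs, String baseTrashPath,
      String existsFilePath, long now) {
    if (!fs.exists(existsFilePath)) {
      return baseTrashPath;              // conflict already gone: plain retry
    }
    return baseTrashPath.replace(existsFilePath, existsFilePath + now);
  }

  public static void main(String[] args) {
    FakeFs fs = new FakeFs();
    String base = "/user/u/.Trash/Current/user/u/b";
    fs.existing.add(base);               // conflict still present: rename it away
    System.out.println(nextBaseTrashPath(fs, base, base, 123L));
    fs.existing.clear();                 // conflict deleted concurrently: keep base
    System.out.println(nextBaseTrashPath(fs, base, base, 123L));
  }
}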



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16365) Upgrade jackson-databind to 2.9.9

2019-06-11 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16861703#comment-16861703
 ] 

Wei-Chiu Chuang commented on HADOOP-16365:
--

+1 pending Jenkins

> Upgrade jackson-databind to 2.9.9
> -
>
> Key: HADOOP-16365
> URL: https://issues.apache.org/jira/browse/HADOOP-16365
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Reporter: Shweta
>Assignee: Shweta
>Priority: Major
> Attachments: HADOOP-16365.001.patch, HADOOP-16365.002.patch
>
>
> Add 2.9.9 version of Jackson-databind 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16365) Upgrade jackson-databind to 2.9.9

2019-06-11 Thread Shweta (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16861696#comment-16861696
 ] 

Shweta commented on HADOOP-16365:
-

[~jojochuang], thanks for the review. Uploaded patch v002. Please review.


> Upgrade jackson-databind to 2.9.9
> -
>
> Key: HADOOP-16365
> URL: https://issues.apache.org/jira/browse/HADOOP-16365
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Reporter: Shweta
>Assignee: Shweta
>Priority: Major
> Attachments: HADOOP-16365.001.patch, HADOOP-16365.002.patch
>
>
> Add 2.9.9 version of Jackson-databind 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16365) Upgrade jackson-databind to 2.9.9

2019-06-11 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta updated HADOOP-16365:

Attachment: HADOOP-16365.002.patch

> Upgrade jackson-databind to 2.9.9
> -
>
> Key: HADOOP-16365
> URL: https://issues.apache.org/jira/browse/HADOOP-16365
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Reporter: Shweta
>Assignee: Shweta
>Priority: Major
> Attachments: HADOOP-16365.001.patch, HADOOP-16365.002.patch
>
>
> Add 2.9.9 version of Jackson-databind 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #884: HDDS-1620. Implement Volume Write Requests to use Cache and DoubleBuffer.

2019-06-11 Thread GitBox
bharatviswa504 commented on a change in pull request #884: HDDS-1620. Implement 
Volume Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/884#discussion_r292727143
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeSetOwnerRequest.java
 ##
 @@ -0,0 +1,188 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.volume;
+
+import java.io.IOException;
+import java.util.Map;
+
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import com.google.common.base.Optional;
+import com.google.common.base.Preconditions;
+
+import org.apache.hadoop.ozone.audit.AuditLogger;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.request.OMClientRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.volume.OMVolumeCreateResponse;
+import org.apache.hadoop.ozone.om.response.volume.OMVolumeSetOwnerResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.SetVolumePropertyRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.SetVolumePropertyResponse;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+/**
+ * Handle set owner request for volume.
+ */
+public class OMVolumeSetOwnerRequest extends OMClientRequest {
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMVolumeSetOwnerRequest.class);
+
+  public OMVolumeSetOwnerRequest(OMRequest omRequest) {
+super(omRequest);
+  }
+
+  @Override
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+  long transactionLogIndex) {
+
+SetVolumePropertyRequest setVolumePropertyRequest =
+getOmRequest().getSetVolumePropertyRequest();
+
+Preconditions.checkNotNull(setVolumePropertyRequest);
+
+OMResponse.Builder omResponse = OMResponse.newBuilder().setCmdType(
+OzoneManagerProtocolProtos.Type.SetVolumeProperty).setStatus(
+OzoneManagerProtocolProtos.Status.OK).setSuccess(true);
+
+// In production this will never happen, this request will be called only
+// when we have ownerName in setVolumePropertyRequest.
+if (!setVolumePropertyRequest.hasOwnerName()) {
+  omResponse.setStatus(OzoneManagerProtocolProtos.Status.INVALID_REQUEST)
+  .setSuccess(false);
+  return new OMVolumeSetOwnerResponse(null, null, null, null,
+  omResponse.build());
+}
+
+OMMetrics omMetrics = ozoneManager.getMetrics();
+omMetrics.incNumVolumeUpdates();
+String volume = setVolumePropertyRequest.getVolumeName();
+String newOwner = setVolumePropertyRequest.getOwnerName();
+
+AuditLogger auditLogger = ozoneManager.getAuditLogger();
+OzoneManagerProtocolProtos.UserInfo userInfo = 
getOmRequest().getUserInfo();
+
+Map auditMap = buildVolumeAuditMap(volume);
+auditMap.put(OzoneConsts.OWNER, newOwner);
+try {
+  // check Acl
+  if (ozoneManager.getAclsEnabled()) {
+checkAcls(ozoneManager, OzoneObj.ResourceType.VOLUME,
+OzoneObj.StoreType.OZONE, IAccessAuthorizer.ACLType.WRITE_ACL,
+volume, null, null);
+  }
+} catch (IOException ex) {
+  LOG.error("Changing volume ownership failed for user:{} volume:{}",
+  newOwner, volume);
+  omMetrics.incNumVolumeUpdateFails();
+  auditLog(auditLogger, 

[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #884: HDDS-1620. Implement Volume Write Requests to use Cache and DoubleBuffer.

2019-06-11 Thread GitBox
bharatviswa504 commented on a change in pull request #884: HDDS-1620. Implement 
Volume Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/884#discussion_r292726928
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeSetOwnerRequest.java
 ##
 @@ -0,0 +1,188 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.volume;
+
+import java.io.IOException;
+import java.util.Map;
+
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import com.google.common.base.Optional;
+import com.google.common.base.Preconditions;
+
+import org.apache.hadoop.ozone.audit.AuditLogger;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.request.OMClientRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.volume.OMVolumeCreateResponse;
+import org.apache.hadoop.ozone.om.response.volume.OMVolumeSetOwnerResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.SetVolumePropertyRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.SetVolumePropertyResponse;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+/**
+ * Handle set owner request for volume.
+ */
+public class OMVolumeSetOwnerRequest extends OMClientRequest {
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMVolumeSetOwnerRequest.class);
+
+  public OMVolumeSetOwnerRequest(OMRequest omRequest) {
+super(omRequest);
+  }
+
+  @Override
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+  long transactionLogIndex) {
+
+SetVolumePropertyRequest setVolumePropertyRequest =
+getOmRequest().getSetVolumePropertyRequest();
+
+Preconditions.checkNotNull(setVolumePropertyRequest);
+
+OMResponse.Builder omResponse = OMResponse.newBuilder().setCmdType(
+OzoneManagerProtocolProtos.Type.SetVolumeProperty).setStatus(
+OzoneManagerProtocolProtos.Status.OK).setSuccess(true);
+
+// In production this will never happen, this request will be called only
+// when we have ownerName in setVolumePropertyRequest.
+if (!setVolumePropertyRequest.hasOwnerName()) {
+  omResponse.setStatus(OzoneManagerProtocolProtos.Status.INVALID_REQUEST)
+  .setSuccess(false);
+  return new OMVolumeSetOwnerResponse(null, null, null, null,
+  omResponse.build());
+}
+
+OMMetrics omMetrics = ozoneManager.getMetrics();
+omMetrics.incNumVolumeUpdates();
+String volume = setVolumePropertyRequest.getVolumeName();
+String newOwner = setVolumePropertyRequest.getOwnerName();
+
+AuditLogger auditLogger = ozoneManager.getAuditLogger();
+OzoneManagerProtocolProtos.UserInfo userInfo = 
getOmRequest().getUserInfo();
+
+Map auditMap = buildVolumeAuditMap(volume);
+auditMap.put(OzoneConsts.OWNER, newOwner);
+try {
+  // check Acl
+  if (ozoneManager.getAclsEnabled()) {
+checkAcls(ozoneManager, OzoneObj.ResourceType.VOLUME,
+OzoneObj.StoreType.OZONE, IAccessAuthorizer.ACLType.WRITE_ACL,
+volume, null, null);
+  }
+} catch (IOException ex) {
+  LOG.error("Changing volume ownership failed for user:{} volume:{}",
+  newOwner, volume);
+  omMetrics.incNumVolumeUpdateFails();
+  auditLog(auditLogger, 

[GitHub] [hadoop] hadoop-yetus commented on issue #927: HDDS-1543. Implement addAcl,removeAcl,setAcl,getAcl for Prefix. Contr…

2019-06-11 Thread GitBox
hadoop-yetus commented on issue #927: HDDS-1543. Implement 
addAcl,removeAcl,setAcl,getAcl for Prefix. Contr…
URL: https://github.com/apache/hadoop/pull/927#issuecomment-501095609
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 31 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 41 | Maven dependency ordering for branch |
   | +1 | mvninstall | 506 | trunk passed |
   | +1 | compile | 304 | trunk passed |
   | +1 | checkstyle | 69 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 801 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 158 | trunk passed |
   | 0 | spotbugs | 335 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 530 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 18 | Maven dependency ordering for patch |
   | +1 | mvninstall | 448 | the patch passed |
   | +1 | compile | 284 | the patch passed |
   | +1 | cc | 284 | the patch passed |
   | +1 | javac | 284 | the patch passed |
   | -0 | checkstyle | 40 | hadoop-ozone: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 634 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 163 | the patch passed |
   | -1 | findbugs | 376 | hadoop-ozone generated 1 new + 0 unchanged - 0 fixed 
= 1 total (was 0) |
   ||| _ Other Tests _ |
   | -1 | unit | 160 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1739 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 46 | The patch does not generate ASF License warnings. |
   | | | 6785 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-ozone |
   |  |  Redundant nullcheck of list, which is known to be non-null in 
org.apache.hadoop.ozone.om.PrefixManagerImpl.removeAcl(OzoneObj, OzoneAcl)  
Redundant null check at PrefixManagerImpl.java:is known to be non-null in 
org.apache.hadoop.ozone.om.PrefixManagerImpl.removeAcl(OzoneObj, OzoneAcl)  
Redundant null check at PrefixManagerImpl.java:[line 180] |
   | Failed junit tests | hadoop.ozone.container.common.impl.TestHddsDispatcher 
|
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-927/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/927 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 5195a3f2a5be 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4ea6c2f |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-927/4/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-927/4/artifact/out/new-findbugs-hadoop-ozone.html
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-927/4/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-927/4/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-927/4/testReport/ |
   | Max. process+thread count | 3984 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-927/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the 

[jira] [Commented] (HADOOP-16350) Ability to tell Hadoop not to request KMS Information from Remote NN

2019-06-11 Thread Greg Senia (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16861684#comment-16861684
 ] 

Greg Senia commented on HADOOP-16350:
-

[~jojochuang] right now we are protected against that because our security team 
requires us to block KMS traffic from leaving our cluster networks. In other 
words, the local cluster initiating the distcp has no access to the remote KMS 
server, so there is no way to get that delegation token even if we tried, since 
the service is blocked. My original thought was to determine whether a folder 
was even using TDE/encryption and, in that case, not attempt to get a delegation 
token from either the local or the remote KMS, which seems more ideal in the 
longer term. For now this patch does solve our problem and lets us move forward 
with an upgrade, but I totally agree it would be better to solve this properly 
long term.

> Ability to tell Hadoop not to request KMS Information from Remote NN 
> -
>
> Key: HADOOP-16350
> URL: https://issues.apache.org/jira/browse/HADOOP-16350
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, kms
>Affects Versions: 2.8.3, 3.0.0, 2.7.6, 3.1.2
>Reporter: Greg Senia
>Assignee: Greg Senia
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16350.patch
>
>
> Before HADOOP-14104, remote KMSServer URIs and their associated remote 
> KMSServer delegation tokens were not requested from the remote NameNode. Many 
> customers were using this as a security feature to prevent TDE/Encryption Zone 
> data from being distcped to remote clusters, while still allowing distcp of 
> data residing in folders that are not being encrypted with a 
> KMSProvider/Encrypted Zone.
> After upgrading to a version of Hadoop that contains HADOOP-14104, distcp now 
> fails because we, along with other customers (HDFS-13696), DO NOT allow 
> KMSServer endpoints to be exposed outside our cluster network, as data residing 
> in these TDE zones is very critical and cannot be distcped between clusters.
> I propose adding a new code block gated by the custom property 
> "hadoop.security.kms.client.allow.remote.kms". It will default to "true", 
> keeping the current behavior of HADOOP-14104, but when set to "false" it will 
> allow this area of code to operate as it did before HADOOP-14104. I can see the 
> value in HADOOP-14104, but the way Hadoop worked before that JIRA should at 
> least have had an option to let the Hadoop/KMS code operate as it did before, 
> by not requesting remote KMSServer URIs, which would otherwise attempt to get a 
> delegation token even when not operating on encrypted zones.
> The error below occurs when KMS Server traffic is not allowed between cluster 
> networks per an enterprise security standard that cannot be changed; the 
> request for an exception was denied, so the only solution is a feature that 
> does not attempt to request the tokens.
> {code:java}
> $ hadoop distcp -Ddfs.namenode.kerberos.principal.pattern=* 
> -Dmapreduce.job.hdfs-servers.token-renewal.exclude=tech 
> hdfs:///processed/public/opendata/samples/distcp_test/distcp_file.txt 
> hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt
> 19/05/29 14:06:09 INFO tools.DistCp: Input Options: DistCpOptions
> {atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, overwrite=false, append=false, useDiff=false, 
> fromSnapshot=null, toSnapshot=null, skipCRC=false, blocking=true, 
> numListstatusThreads=0, maxMaps=20, mapBandwidth=100, 
> sslConfigurationFile='null', copyStrategy='uniformsize', preserveStatus=[], 
> preserveRawXattrs=false, atomicWorkPath=null, logPath=null, 
> sourceFileListing=null, 
> sourcePaths=[hdfs:/processed/public/opendata/samples/distcp_test/distcp_file.txt],
>  
> targetPath=hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt,
>  targetPathExists=true, filtersFile='null', verboseLog=false}
> 19/05/29 14:06:09 INFO client.AHSProxy: Connecting to Application History 
> server at ha21d53mn.unit.hdp.example.com/10.70.49.2:10200
> 19/05/29 14:06:10 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 
> 5093920 for gss2002 on ha-hdfs:unit
> 19/05/29 14:06:10 INFO security.TokenCache: Got dt for hdfs://unit; Kind: 
> HDFS_DELEGATION_TOKEN, Service: ha-hdfs:unit, Ident: (HDFS_DELEGATION_TOKEN 
> token 5093920 for gss2002)
> 19/05/29 14:06:10 INFO security.TokenCache: Got dt for hdfs://unit; Kind: 
> kms-dt, Service: ha21d53en.unit.hdp.example.com:9292, Ident: (owner=gss2002, 
> renewer=yarn, realUser=, issueDate=1559153170120, maxDate=1559757970120, 
> sequenceNumber=237, masterKeyId=2)
> 19/05/29 14:06:10 INFO tools.SimpleCopyListing: Paths (files+dirs) cnt = 1; 
> dirCnt = 0
> 
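
As a rough illustration of the property proposed above (this is not the attached 
HADOOP-16350.patch; the RemoteKmsPolicy class, its method name, and the call 
site are hypothetical), the flag could gate the remote-KMS URI lookup like this:

{code:java}
import org.apache.hadoop.conf.Configuration;

public final class RemoteKmsPolicy {
  // Property proposed in this issue; defaults to true so the HADOOP-14104
  // behaviour of asking the remote NameNode for KMS URIs (and hence
  // delegation tokens) is preserved.
  public static final String ALLOW_REMOTE_KMS_KEY =
      "hadoop.security.kms.client.allow.remote.kms";
  public static final boolean ALLOW_REMOTE_KMS_DEFAULT = true;

  private RemoteKmsPolicy() {
  }

  /** True if the client should ask a remote NameNode for its KMS URIs. */
  public static boolean shouldRequestRemoteKms(Configuration conf) {
    return conf.getBoolean(ALLOW_REMOTE_KMS_KEY, ALLOW_REMOTE_KMS_DEFAULT);
  }
}
{code}

A caller preparing delegation tokens for distcp would then skip the remote KMS 
lookup when shouldRequestRemoteKms(conf) returns false, restoring the 
pre-HADOOP-14104 behaviour for clusters that block KMS traffic.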

[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #884: HDDS-1620. Implement Volume Write Requests to use Cache and DoubleBuffer.

2019-06-11 Thread GitBox
bharatviswa504 commented on a change in pull request #884: HDDS-1620. Implement 
Volume Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/884#discussion_r292722424
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeSetOwnerRequest.java
 ##
 @@ -0,0 +1,188 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.volume;
+
+import java.io.IOException;
+import java.util.Map;
+
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import com.google.common.base.Optional;
+import com.google.common.base.Preconditions;
+
+import org.apache.hadoop.ozone.audit.AuditLogger;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.request.OMClientRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.volume.OMVolumeCreateResponse;
+import org.apache.hadoop.ozone.om.response.volume.OMVolumeSetOwnerResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.SetVolumePropertyRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.SetVolumePropertyResponse;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+/**
+ * Handle set owner request for volume.
+ */
+public class OMVolumeSetOwnerRequest extends OMClientRequest {
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMVolumeSetOwnerRequest.class);
+
+  public OMVolumeSetOwnerRequest(OMRequest omRequest) {
+super(omRequest);
+  }
+
+  @Override
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+  long transactionLogIndex) {
+
+SetVolumePropertyRequest setVolumePropertyRequest =
+getOmRequest().getSetVolumePropertyRequest();
+
+Preconditions.checkNotNull(setVolumePropertyRequest);
+
+OMResponse.Builder omResponse = OMResponse.newBuilder().setCmdType(
+OzoneManagerProtocolProtos.Type.SetVolumeProperty).setStatus(
+OzoneManagerProtocolProtos.Status.OK).setSuccess(true);
+
+// In production this will never happen, this request will be called only
+// when we have ownerName in setVolumePropertyRequest.
+if (!setVolumePropertyRequest.hasOwnerName()) {
+  omResponse.setStatus(OzoneManagerProtocolProtos.Status.INVALID_REQUEST)
+  .setSuccess(false);
+  return new OMVolumeSetOwnerResponse(null, null, null, null,
+  omResponse.build());
+}
+
+OMMetrics omMetrics = ozoneManager.getMetrics();
+omMetrics.incNumVolumeUpdates();
+String volume = setVolumePropertyRequest.getVolumeName();
+String newOwner = setVolumePropertyRequest.getOwnerName();
+
+AuditLogger auditLogger = ozoneManager.getAuditLogger();
+OzoneManagerProtocolProtos.UserInfo userInfo = 
getOmRequest().getUserInfo();
+
+Map auditMap = buildVolumeAuditMap(volume);
+auditMap.put(OzoneConsts.OWNER, newOwner);
+try {
+  // check Acl
+  if (ozoneManager.getAclsEnabled()) {
+checkAcls(ozoneManager, OzoneObj.ResourceType.VOLUME,
+OzoneObj.StoreType.OZONE, IAccessAuthorizer.ACLType.WRITE_ACL,
+volume, null, null);
+  }
+} catch (IOException ex) {
+  LOG.error("Changing volume ownership failed for user:{} volume:{}",
+  newOwner, volume);
+  omMetrics.incNumVolumeUpdateFails();
+  auditLog(auditLogger, 

[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #884: HDDS-1620. Implement Volume Write Requests to use Cache and DoubleBuffer.

2019-06-11 Thread GitBox
bharatviswa504 commented on a change in pull request #884: HDDS-1620. Implement 
Volume Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/884#discussion_r292723245
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeDeleteRequest.java
 ##
 @@ -0,0 +1,195 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.volume;
+
+import java.io.IOException;
+
+import com.google.common.base.Optional;
+import com.google.common.base.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.ozone.audit.AuditLogger;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.response.volume.OMVolumeCreateResponse;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.ozone.om.request.OMClientRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.volume.OMVolumeDeleteResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.DeleteVolumeRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.DeleteVolumeResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+
+/**
+ * Handles volume delete request.
+ */
+public class OMVolumeDeleteRequest extends OMClientRequest {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMVolumeDeleteRequest.class);
+
+  public OMVolumeDeleteRequest(OMRequest omRequest) {
+super(omRequest);
+  }
+
+  @Override
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+  long transactionLogIndex) {
+
+DeleteVolumeRequest deleteVolumeRequest =
+getOmRequest().getDeleteVolumeRequest();
+Preconditions.checkNotNull(deleteVolumeRequest);
+
+String volume = deleteVolumeRequest.getVolumeName();
+
+OMMetrics omMetrics = ozoneManager.getMetrics();
+omMetrics.incNumVolumeDeletes();
+
+OMResponse.Builder omResponse = OMResponse.newBuilder().setCmdType(
+OzoneManagerProtocolProtos.Type.CreateVolume).setStatus(
+OzoneManagerProtocolProtos.Status.OK).setSuccess(true);
+
+AuditLogger auditLogger = ozoneManager.getAuditLogger();
+OzoneManagerProtocolProtos.UserInfo userInfo = 
getOmRequest().getUserInfo();
+
+try {
+  // check Acl
+  if (ozoneManager.getAclsEnabled()) {
+checkAcls(ozoneManager, OzoneObj.ResourceType.VOLUME,
+OzoneObj.StoreType.OZONE, IAccessAuthorizer.ACLType.DELETE, volume,
+null, null);
+  }
+} catch (IOException ex) {
+  LOG.error("Volume deletion failed for volume:{}", volume, ex);
+  omMetrics.incNumVolumeDeleteFails();
+  auditLog(auditLogger, buildAuditMessage(OMAction.DELETE_VOLUME,
+  buildVolumeAuditMap(volume), ex, userInfo));
+  return new OMVolumeCreateResponse(null, null,
+  createErrorOMResponse(omResponse, ex));
+}
+
+OMMetadataManager omMetadataManager = ozoneManager.getMetadataManager();
+
+OmVolumeArgs omVolumeArgs = null;
+try {
+  omVolumeArgs = getVolumeInfo(omMetadataManager, volume);
 
 Review comment:
   So that this volume does not get modified during ownerName read.
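   As an aside, a minimal self-contained sketch of the read-then-modify-under-lock 
pattern described here (a plain ReentrantReadWriteLock stands in for the 
OzoneManager's own volume lock; this is not the actual OM code):

{code:java}
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative only: generic stand-in for a volume record guarded by a lock.
class GuardedVolume {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
  private String owner = "initial-owner";

  String setOwner(String newOwner) {
    lock.writeLock().lock();
    try {
      // While the write lock is held, no concurrent request can modify the
      // volume between reading the old owner and installing the new one.
      String oldOwner = owner;
      owner = newOwner;
      return oldOwner;
    } finally {
      lock.writeLock().unlock();
    }
  }
}
{code}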


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries 

[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #884: HDDS-1620. Implement Volume Write Requests to use Cache and DoubleBuffer.

2019-06-11 Thread GitBox
bharatviswa504 commented on a change in pull request #884: HDDS-1620. Implement 
Volume Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/884#discussion_r292721717
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/OMClientRequest.java
 ##
 @@ -181,17 +181,6 @@ protected OMResponse 
createErrorOMResponse(OMResponse.Builder omResponse,
 return omResponse.build();
   }
 
-
-  /*
-   * This method sets the omRequest. This will method will be called when
-   * preExecute modifies the original request.
-   * @param updatedOMRequest
-   */
-  protected void setUpdatedOmRequest(OMRequest updatedOMRequest) {
-Preconditions.checkNotNull(updatedOMRequest);
-this.omRequest = updatedOMRequest;
-  }
 
 Review comment:
   Yes, that is the reason I removed this method; this was done based on the 
above comment about execute. Will revisit while merging the HA/non-HA code.
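   For context, one way the removed mutator can be avoided, sketched with 
simplified stand-in types (this is an illustration, not the actual 
OMRequest/OMClientRequest API):

{code:java}
// Schematic stand-ins; real OM requests are protobuf messages.
final class Request {
  final String payload;

  Request(String payload) {
    this.payload = payload;
  }
}

abstract class ClientRequest {
  private final Request omRequest;

  ClientRequest(Request omRequest) {
    this.omRequest = omRequest;
  }

  Request getOmRequest() {
    return omRequest;
  }

  // Instead of a setUpdatedOmRequest(...) mutator, preExecute returns the
  // (possibly rewritten) request; the caller then wraps it in a fresh
  // ClientRequest, so the stored request never changes after construction.
  abstract Request preExecute();
}
{code}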


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] chenjunjiedada commented on a change in pull request #942: HDDS-1587. Support dynamically adding delegated classes from to isolated class loader.

2019-06-11 Thread GitBox
chenjunjiedada commented on a change in pull request #942: HDDS-1587. Support 
dynamically adding delegated classes from to isolated class loader.
URL: https://github.com/apache/hadoop/pull/942#discussion_r292722838
 
 

 ##
 File path: 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/FilteredClassLoader.java
 ##
 @@ -57,6 +57,12 @@ public FilteredClassLoader(URL[] urls, ClassLoader parent) {
 
delegatedClasses.add("org.apache.hadoop.fs.ozone.OzoneFSStorageStatistics");
 delegatedClasses.add("org.apache.hadoop.fs.ozone.Statistic");
 delegatedClasses.add("org.apache.hadoop.fs.Seekable");
+String[] dynamicDelegatedClasses =
+System.getProperty("HADOOP_OZONE_DELEGATED_CLASSES").split(";");
+for (String delegatedClass : dynamicDelegatedClasses) {
+  delegatedClasses.add(delegatedClass);
+}
 
 Review comment:
   getTrimmedStringCollection is a member function of the Configuration class, so 
we can't use it directly here. I can add trim() if we care about whitespace. Is 
that OK?
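   A small sketch of the split-plus-trim variant being discussed (illustrative 
only; the property name matches the patch, while the null/empty guard and the 
helper class are extra assumptions):

{code:java}
import java.util.ArrayList;
import java.util.List;

final class DelegatedClassesParser {

  private DelegatedClassesParser() {
  }

  // Split the semicolon-separated list from the system property, trimming
  // whitespace around each entry and skipping empty ones.
  static List<String> parse() {
    List<String> result = new ArrayList<>();
    String raw = System.getProperty("HADOOP_OZONE_DELEGATED_CLASSES");
    if (raw == null || raw.isEmpty()) {
      return result;
    }
    for (String name : raw.split(";")) {
      String trimmed = name.trim();
      if (!trimmed.isEmpty()) {
        result.add(trimmed);
      }
    }
    return result;
  }
}
{code}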


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16350) Ability to tell Hadoop not to request KMS Information from Remote NN

2019-06-11 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16861674#comment-16861674
 ] 

Wei-Chiu Chuang edited comment on HADOOP-16350 at 6/12/19 2:04 AM:
---

Hi [~gss2002] I think I understand the problem statement, but I don't really 
feel this is the proper solution.

In fact, even prior to HADOOP-14104, it was still possible to distcp out of 
encryption zone (in fact, this is [a feature supported by Cloudera 
BDR|https://www.cloudera.com/documentation/enterprise/5-8-x/topics/cm_bdr_replication_and_encryption.html#xd_583c10bfdbd326ba-5676e95c-13ed333c3d9--7ff3])
 by acquiring kms delegation token from the remote cluster manually.

But this is an interesting problem and I've never thought about this use case. 
Don't have a good answer now.


was (Author: jojochuang):
Hi [~gss2002] I think I understand the problem statement, but I don't really 
feel this is the proper solution.

In fact, even prior to HADOOP-14104, it was still possible to distcp out of 
encryption zone (in fact, this is a feature supported by Cloudera BDR) by 
acquiring kms delegation token from the remote cluster manually.

But this is an interesting problem and I've never thought about this use case. 
Don't have a good answer now.

> Ability to tell Hadoop not to request KMS Information from Remote NN 
> -
>
> Key: HADOOP-16350
> URL: https://issues.apache.org/jira/browse/HADOOP-16350
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, kms
>Affects Versions: 2.8.3, 3.0.0, 2.7.6, 3.1.2
>Reporter: Greg Senia
>Assignee: Greg Senia
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16350.patch
>
>
> Before HADOOP-14104, remote KMSServer URIs and their associated remote 
> KMSServer delegation tokens were not requested from the remote NameNode. Many 
> customers were using this as a security feature to prevent TDE/Encryption Zone 
> data from being distcped to remote clusters, while still allowing distcp of 
> data residing in folders that are not being encrypted with a 
> KMSProvider/Encrypted Zone.
> After upgrading to a version of Hadoop that contains HADOOP-14104, distcp now 
> fails because we, along with other customers (HDFS-13696), DO NOT allow 
> KMSServer endpoints to be exposed outside our cluster network, as data residing 
> in these TDE zones is very critical and cannot be distcped between clusters.
> I propose adding a new code block gated by the custom property 
> "hadoop.security.kms.client.allow.remote.kms". It will default to "true", 
> keeping the current behavior of HADOOP-14104, but when set to "false" it will 
> allow this area of code to operate as it did before HADOOP-14104. I can see the 
> value in HADOOP-14104, but the way Hadoop worked before that JIRA should at 
> least have had an option to let the Hadoop/KMS code operate as it did before, 
> by not requesting remote KMSServer URIs, which would otherwise attempt to get a 
> delegation token even when not operating on encrypted zones.
> The error below occurs when KMS Server traffic is not allowed between cluster 
> networks per an enterprise security standard that cannot be changed; the 
> request for an exception was denied, so the only solution is a feature that 
> does not attempt to request the tokens.
> {code:java}
> $ hadoop distcp -Ddfs.namenode.kerberos.principal.pattern=* 
> -Dmapreduce.job.hdfs-servers.token-renewal.exclude=tech 
> hdfs:///processed/public/opendata/samples/distcp_test/distcp_file.txt 
> hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt
> 19/05/29 14:06:09 INFO tools.DistCp: Input Options: DistCpOptions
> {atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, overwrite=false, append=false, useDiff=false, 
> fromSnapshot=null, toSnapshot=null, skipCRC=false, blocking=true, 
> numListstatusThreads=0, maxMaps=20, mapBandwidth=100, 
> sslConfigurationFile='null', copyStrategy='uniformsize', preserveStatus=[], 
> preserveRawXattrs=false, atomicWorkPath=null, logPath=null, 
> sourceFileListing=null, 
> sourcePaths=[hdfs:/processed/public/opendata/samples/distcp_test/distcp_file.txt],
>  
> targetPath=hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt,
>  targetPathExists=true, filtersFile='null', verboseLog=false}
> 19/05/29 14:06:09 INFO client.AHSProxy: Connecting to Application History 
> server at ha21d53mn.unit.hdp.example.com/10.70.49.2:10200
> 19/05/29 14:06:10 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 
> 5093920 for gss2002 on ha-hdfs:unit
> 19/05/29 14:06:10 INFO security.TokenCache: Got dt for hdfs://unit; Kind: 
> HDFS_DELEGATION_TOKEN, Service: ha-hdfs:unit, Ident: 

[jira] [Commented] (HADOOP-16350) Ability to tell Hadoop not to request KMS Information from Remote NN

2019-06-11 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16861674#comment-16861674
 ] 

Wei-Chiu Chuang commented on HADOOP-16350:
--

Hi [~gss2002] I think I understand the problem statement, but I don't really 
feel this is the proper solution.

In fact, even prior to HADOOP-14104, it was still possible to distcp out of 
encryption zone (in fact, this is a feature supported by Cloudera BDR) by 
acquiring kms delegation token from the remote cluster manually.

But this is an interesting problem and I've never thought about this use case. 
Don't have a good answer now.

> Ability to tell Hadoop not to request KMS Information from Remote NN 
> -
>
> Key: HADOOP-16350
> URL: https://issues.apache.org/jira/browse/HADOOP-16350
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, kms
>Affects Versions: 2.8.3, 3.0.0, 2.7.6, 3.1.2
>Reporter: Greg Senia
>Assignee: Greg Senia
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16350.patch
>
>
> Before HADOOP-14104, remote KMSServer URIs and their associated remote 
> KMSServer delegation tokens were not requested from the remote NameNode. Many 
> customers were using this as a security feature to prevent TDE/Encryption Zone 
> data from being distcped to remote clusters, while still allowing distcp of 
> data residing in folders that are not being encrypted with a 
> KMSProvider/Encrypted Zone.
> After upgrading to a version of Hadoop that contains HADOOP-14104, distcp now 
> fails because we, along with other customers (HDFS-13696), DO NOT allow 
> KMSServer endpoints to be exposed outside our cluster network, as data residing 
> in these TDE zones is very critical and cannot be distcped between clusters.
> I propose adding a new code block gated by the custom property 
> "hadoop.security.kms.client.allow.remote.kms". It will default to "true", 
> keeping the current behavior of HADOOP-14104, but when set to "false" it will 
> allow this area of code to operate as it did before HADOOP-14104. I can see the 
> value in HADOOP-14104, but the way Hadoop worked before that JIRA should at 
> least have had an option to let the Hadoop/KMS code operate as it did before, 
> by not requesting remote KMSServer URIs, which would otherwise attempt to get a 
> delegation token even when not operating on encrypted zones.
> The error below occurs when KMS Server traffic is not allowed between cluster 
> networks per an enterprise security standard that cannot be changed; the 
> request for an exception was denied, so the only solution is a feature that 
> does not attempt to request the tokens.
> {code:java}
> $ hadoop distcp -Ddfs.namenode.kerberos.principal.pattern=* 
> -Dmapreduce.job.hdfs-servers.token-renewal.exclude=tech 
> hdfs:///processed/public/opendata/samples/distcp_test/distcp_file.txt 
> hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt
> 19/05/29 14:06:09 INFO tools.DistCp: Input Options: DistCpOptions
> {atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, overwrite=false, append=false, useDiff=false, 
> fromSnapshot=null, toSnapshot=null, skipCRC=false, blocking=true, 
> numListstatusThreads=0, maxMaps=20, mapBandwidth=100, 
> sslConfigurationFile='null', copyStrategy='uniformsize', preserveStatus=[], 
> preserveRawXattrs=false, atomicWorkPath=null, logPath=null, 
> sourceFileListing=null, 
> sourcePaths=[hdfs:/processed/public/opendata/samples/distcp_test/distcp_file.txt],
>  
> targetPath=hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt,
>  targetPathExists=true, filtersFile='null', verboseLog=false}
> 19/05/29 14:06:09 INFO client.AHSProxy: Connecting to Application History 
> server at ha21d53mn.unit.hdp.example.com/10.70.49.2:10200
> 19/05/29 14:06:10 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 
> 5093920 for gss2002 on ha-hdfs:unit
> 19/05/29 14:06:10 INFO security.TokenCache: Got dt for hdfs://unit; Kind: 
> HDFS_DELEGATION_TOKEN, Service: ha-hdfs:unit, Ident: (HDFS_DELEGATION_TOKEN 
> token 5093920 for gss2002)
> 19/05/29 14:06:10 INFO security.TokenCache: Got dt for hdfs://unit; Kind: 
> kms-dt, Service: ha21d53en.unit.hdp.example.com:9292, Ident: (owner=gss2002, 
> renewer=yarn, realUser=, issueDate=1559153170120, maxDate=1559757970120, 
> sequenceNumber=237, masterKeyId=2)
> 19/05/29 14:06:10 INFO tools.SimpleCopyListing: Paths (files+dirs) cnt = 1; 
> dirCnt = 0
> 19/05/29 14:06:10 INFO tools.SimpleCopyListing: Build file listing completed.
> 19/05/29 14:06:10 INFO tools.DistCp: Number of paths in the copy list: 1
> 19/05/29 14:06:10 INFO tools.DistCp: Number of paths in the copy list: 1
> 19/05/29 14:06:10 INFO 

[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #884: HDDS-1620. Implement Volume Write Requests to use Cache and DoubleBuffer.

2019-06-11 Thread GitBox
bharatviswa504 commented on a change in pull request #884: HDDS-1620. Implement 
Volume Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/884#discussion_r292721758
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketCreateRequest.java
 ##
 @@ -97,8 +97,9 @@ public OMRequest preExecute(OzoneManager ozoneManager) 
throws IOException {
 }
 
 newCreateBucketRequest.setBucketInfo(newBucketInfo.build());
+
 return getOmRequest().toBuilder().setUserInfo(getUserInfo())
-.setCreateBucketRequest(newCreateBucketRequest.build()).build();
+   .setCreateBucketRequest(newCreateBucketRequest.build()).build();
   }
 
 Review comment:
   Yes. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #884: HDDS-1620. Implement Volume Write Requests to use Cache and DoubleBuffer.

2019-06-11 Thread GitBox
bharatviswa504 commented on a change in pull request #884: HDDS-1620. Implement 
Volume Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/884#discussion_r292721717
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/OMClientRequest.java
 ##
 @@ -181,17 +181,6 @@ protected OMResponse 
createErrorOMResponse(OMResponse.Builder omResponse,
 return omResponse.build();
   }
 
-
-  /*
-   * This method sets the omRequest. This will method will be called when
-   * preExecute modifies the original request.
-   * @param updatedOMRequest
-   */
-  protected void setUpdatedOmRequest(OMRequest updatedOMRequest) {
-Preconditions.checkNotNull(updatedOMRequest);
-this.omRequest = updatedOMRequest;
-  }
 
 Review comment:
   Yes, that is the reason I removed this method. This is done based on the 
above comment on execute. Will revisit during the HA/Non-HA code merge.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org




[GitHub] [hadoop] chenjunjiedada commented on a change in pull request #942: HDDS-1587. Support dynamically adding delegated classes from to isolated class loader.

2019-06-11 Thread GitBox
chenjunjiedada commented on a change in pull request #942: HDDS-1587. Support 
dynamically adding delegated classes from to isolated class loader.
URL: https://github.com/apache/hadoop/pull/942#discussion_r292721562
 
 

 ##
 File path: hadoop-ozone/common/src/main/bin/ozone-config.sh
 ##
 @@ -49,3 +49,8 @@ else
   exit 1
 fi
 
+# HADOOP_OZONE_DELEGATED_CLASSES defines a list of classes which will be 
loaded by default
+# class loader of application instead of isolated class loader. With this way 
we can solve
+# incompatible problem when using hadoop3.x + ozone with older hadoop version.
+#export HADOOP_OZONE_DELEGATED_CLASSES=
 
 Review comment:
   I see.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hanishakoneru commented on a change in pull request #884: HDDS-1620. Implement Volume Write Requests to use Cache and DoubleBuffer.

2019-06-11 Thread GitBox
hanishakoneru commented on a change in pull request #884: HDDS-1620. Implement 
Volume Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/884#discussion_r292719627
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/OMClientRequest.java
 ##
 @@ -181,17 +181,6 @@ protected OMResponse 
createErrorOMResponse(OMResponse.Builder omResponse,
 return omResponse.build();
   }
 
-
-  /*
-   * This method sets the omRequest. This will method will be called when
-   * preExecute modifies the original request.
-   * @param updatedOMRequest
-   */
-  protected void setUpdatedOmRequest(OMRequest updatedOMRequest) {
-Preconditions.checkNotNull(updatedOMRequest);
-this.omRequest = updatedOMRequest;
-  }
 
 Review comment:
   We are not updating omRequest in OmClientRequest anymore?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hanishakoneru commented on a change in pull request #884: HDDS-1620. Implement Volume Write Requests to use Cache and DoubleBuffer.

2019-06-11 Thread GitBox
hanishakoneru commented on a change in pull request #884: HDDS-1620. Implement 
Volume Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/884#discussion_r292720955
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketCreateRequest.java
 ##
 @@ -97,8 +97,9 @@ public OMRequest preExecute(OzoneManager ozoneManager) 
throws IOException {
 }
 
 newCreateBucketRequest.setBucketInfo(newBucketInfo.build());
+
 return getOmRequest().toBuilder().setUserInfo(getUserInfo())
-.setCreateBucketRequest(newCreateBucketRequest.build()).build();
+   .setCreateBucketRequest(newCreateBucketRequest.build()).build();
   }
 
 Review comment:
   We are not setting the new omRequest in OMClientRequest anymore?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #942: HDDS-1587. Support dynamically adding delegated classes from to isolated class loader.

2019-06-11 Thread GitBox
xiaoyuyao commented on a change in pull request #942: HDDS-1587. Support 
dynamically adding delegated classes from to isolated class loader.
URL: https://github.com/apache/hadoop/pull/942#discussion_r292721104
 
 

 ##
 File path: 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/FilteredClassLoader.java
 ##
 @@ -57,6 +57,12 @@ public FilteredClassLoader(URL[] urls, ClassLoader parent) {
 
delegatedClasses.add("org.apache.hadoop.fs.ozone.OzoneFSStorageStatistics");
 delegatedClasses.add("org.apache.hadoop.fs.ozone.Statistic");
 delegatedClasses.add("org.apache.hadoop.fs.Seekable");
+String[] dynamicDelegatedClasses =
+System.getProperty("HADOOP_OZONE_DELEGATED_CLASSES").split(";");
+for (String delegatedClass : dynamicDelegatedClasses) {
+  delegatedClasses.add(delegatedClass);
+}
 
 Review comment:
   You can replace the code L60-64 with 
   delegatedClasses.addAll(
   
getTrimmedStringCollection(System.getenv("HADOOP_OZONE_DELEGATED_CLASSES")));
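   
   For reference, a minimal sketch of what that replacement might look like 
   (illustrative only; the wrapper class name is made up, and note that 
   getTrimmedStringCollection splits on commas, whereas the current patch 
   splits on ";"):
   
       import java.util.HashSet;
       import java.util.Set;
       import org.apache.hadoop.util.StringUtils;
   
       public final class DelegatedClassesSketch {
         // Sketch, not the actual FilteredClassLoader code.
         static Set<String> loadDelegatedClasses() {
           Set<String> delegatedClasses = new HashSet<>();
           String extra = System.getenv("HADOOP_OZONE_DELEGATED_CLASSES");
           if (extra != null) {
             // Trims whitespace around each comma-separated class name.
             delegatedClasses.addAll(StringUtils.getTrimmedStringCollection(extra));
           }
           return delegatedClasses;
         }
       }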
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #942: HDDS-1587. Support dynamically adding delegated classes from to isolated class loader.

2019-06-11 Thread GitBox
xiaoyuyao commented on a change in pull request #942: HDDS-1587. Support 
dynamically adding delegated classes from to isolated class loader.
URL: https://github.com/apache/hadoop/pull/942#discussion_r292720912
 
 

 ##
 File path: hadoop-ozone/common/src/main/bin/ozone-config.sh
 ##
 @@ -49,3 +49,8 @@ else
   exit 1
 fi
 
+# HADOOP_OZONE_DELEGATED_CLASSES defines a list of classes which will be 
loaded by default
+# class loader of application instead of isolated class loader. With this way 
we can solve
+# incompatible problem when using hadoop3.x + ozone with older hadoop version.
+#export HADOOP_OZONE_DELEGATED_CLASSES=
 
 Review comment:
   This may not work, because we expect to expose an environment variable but 
the code in FilteredClassLoader only loads a system property. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #942: HDDS-1587. Support dynamically adding delegated classes from to isolated class loader.

2019-06-11 Thread GitBox
xiaoyuyao commented on a change in pull request #942: HDDS-1587. Support 
dynamically adding delegated classes from to isolated class loader.
URL: https://github.com/apache/hadoop/pull/942#discussion_r292720912
 
 

 ##
 File path: hadoop-ozone/common/src/main/bin/ozone-config.sh
 ##
 @@ -49,3 +49,8 @@ else
   exit 1
 fi
 
+# HADOOP_OZONE_DELEGATED_CLASSES defines a list of classes which will be 
loaded by default
+# class loader of application instead of isolated class loader. With this way 
we can solve
+# incompatible problem when using hadoop3.x + ozone with older hadoop version.
+#export HADOOP_OZONE_DELEGATED_CLASSES=
 
 Review comment:
   Here we expose an environment variable, but the code in FilteredClassLoader 
only loads a system property. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16350) Ability to tell Hadoop not to request KMS Information from Remote NN

2019-06-11 Thread Greg Senia (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16861661#comment-16861661
 ] 

Greg Senia edited comment on HADOOP-16350 at 6/12/19 1:46 AM:
--

[~szetszwo], no problem, it solves the interim problem for us. The real problem 
is that we use KMSClientProvider locally in the cluster, so what we are trying 
to solve for is to allow the code to operate as it did before HADOOP-14104. In 
our case we just want to disable the call that is made; we have verified that 
the patch I provided, when set to false, reverts the code to operate as it did 
before HADOOP-14104. Please provide more details on how you plan to go about 
making it operate as it did before HADOOP-14104, as I am willing to modify the 
patch, but we need the same functionality that existed before HADOOP-14104. Not 
trying to be a pain, but our customers and security team basically relied on 
this feature to prevent folders that contained TDE/encrypted data from being 
moved from cluster to cluster.


was (Author: gss2002):
[~szetszwo], the problem is that we use KMSClientProvider locally in the cluster; 
what we are trying to solve for is to allow the code to operate as it did before 
HADOOP-14104. In our case we just want to disable the call that is made; we have 
verified that the patch I provided, when set to false, reverts the code to 
operate as it did before HADOOP-14104. Please provide more details on how you 
plan to go about making it operate as it did before HADOOP-14104, as I am 
willing to modify the patch, but we need the same functionality that existed 
before HADOOP-14104.

> Ability to tell Hadoop not to request KMS Information from Remote NN 
> -
>
> Key: HADOOP-16350
> URL: https://issues.apache.org/jira/browse/HADOOP-16350
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, kms
>Affects Versions: 2.8.3, 3.0.0, 2.7.6, 3.1.2
>Reporter: Greg Senia
>Assignee: Greg Senia
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16350.patch
>
>
> Before HADOOP-14104 Remote KMSServer URIs were not requested from the remote 
> NameNode and their associated remote KMSServer delegation token. Many 
> customers were using this as a security feature to prevent TDE/Encryption 
> Zone data from being distcped to remote clusters. But there was still a use 
> case to allow distcp of data residing in folders that are not being encrypted 
> with a KMSProvider/Encrypted Zone.
> So after upgrading to a version of Hadoop that contained HADOOP-14104 distcp 
> now fails as we along with other customers (HDFS-13696) DO NOT allow 
> KMSServer endpoints to be exposed out of our cluster network as data residing 
> in these TDE/Zones contain very critical data that cannot be distcped between 
> clusters.
> I propose adding a new code block with the following custom property 
> "hadoop.security.kms.client.allow.remote.kms" it will default to "true" so 
> keeping current feature of HADOOP-14104 but if specified to "false" will 
> allow this area of code to operate as it did before HADOOP-14104. I can see 
> the value in HADOOP-14104 but the way Hadoop worked before this JIRA/Issue 
> should have at least had an option specified to allow Hadoop/KMS code to 
> operate similarly to how it did before, by not requesting remote KMSServer URIs 
> which would then attempt to get a delegation token even if not operating on 
> encrypted zones.
> The following error occurs when KMS Server traffic is not allowed between 
> cluster networks, per an enterprise security standard that cannot be changed; 
> the request for an exception was denied, so the only solution is to allow a 
> feature to not attempt to request tokens. 
> {code:java}
> $ hadoop distcp -Ddfs.namenode.kerberos.principal.pattern=* 
> -Dmapreduce.job.hdfs-servers.token-renewal.exclude=tech 
> hdfs:///processed/public/opendata/samples/distcp_test/distcp_file.txt 
> hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt
> 19/05/29 14:06:09 INFO tools.DistCp: Input Options: DistCpOptions
> {atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, overwrite=false, append=false, useDiff=false, 
> fromSnapshot=null, toSnapshot=null, skipCRC=false, blocking=true, 
> numListstatusThreads=0, maxMaps=20, mapBandwidth=100, 
> sslConfigurationFile='null', copyStrategy='uniformsize', preserveStatus=[], 
> preserveRawXattrs=false, atomicWorkPath=null, logPath=null, 
> sourceFileListing=null, 
> sourcePaths=[hdfs:/processed/public/opendata/samples/distcp_test/distcp_file.txt],
>  
> targetPath=hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt,
>  targetPathExists=true, filtersFile='null', verboseLog=false}
> 19/05/29 14:06:09 INFO client.AHSProxy: Connecting 

[jira] [Commented] (HADOOP-16350) Ability to tell Hadoop not to request KMS Information from Remote NN

2019-06-11 Thread Greg Senia (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16861661#comment-16861661
 ] 

Greg Senia commented on HADOOP-16350:
-

[~szetszwo], the problem is that we use KMSClientProvider locally in the cluster; 
what we are trying to solve for is to allow the code to operate as it did before 
HADOOP-14104. In our case we just want to disable the call that is made; we have 
verified that the patch I provided, when set to false, reverts the code to 
operate as it did before HADOOP-14104. Please provide more details on how you 
plan to go about making it operate as it did before HADOOP-14104, as I am 
willing to modify the patch, but we need the same functionality that existed 
before HADOOP-14104.
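
A sketch of how the property proposed in this issue could be consumed on the 
client side (the property name comes from the description above; the class and 
method names here are only illustrative, not the attached patch):

{code:java}
import org.apache.hadoop.conf.Configuration;

// Illustrative sketch only -- not HADOOP-16350.patch itself.
public final class RemoteKmsFlagSketch {
  public static boolean allowRemoteKms(Configuration conf) {
    // Defaults to true, preserving the HADOOP-14104 behaviour; setting the
    // property to false would skip asking the remote NameNode for KMS
    // provider information.
    return conf.getBoolean("hadoop.security.kms.client.allow.remote.kms", true);
  }
}
{code}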

> Ability to tell Hadoop not to request KMS Information from Remote NN 
> -
>
> Key: HADOOP-16350
> URL: https://issues.apache.org/jira/browse/HADOOP-16350
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, kms
>Affects Versions: 2.8.3, 3.0.0, 2.7.6, 3.1.2
>Reporter: Greg Senia
>Assignee: Greg Senia
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16350.patch
>
>
> Before HADOOP-14104 Remote KMSServer URIs were not requested from the remote 
> NameNode and their associated remote KMSServer delegation token. Many 
> customers were using this as a security feature to prevent TDE/Encryption 
> Zone data from being distcped to remote clusters. But there was still a use 
> case to allow distcp of data residing in folders that are not being encrypted 
> with a KMSProvider/Encrypted Zone.
> So after upgrading to a version of Hadoop that contained HADOOP-14104 distcp 
> now fails as we along with other customers (HDFS-13696) DO NOT allow 
> KMSServer endpoints to be exposed out of our cluster network as data residing 
> in these TDE/Zones contain very critical data that cannot be distcped between 
> clusters.
> I propose adding a new code block with the following custom property 
> "hadoop.security.kms.client.allow.remote.kms" it will default to "true" so 
> keeping current feature of HADOOP-14104 but if specified to "false" will 
> allow this area of code to operate as it did before HADOOP-14104. I can see 
> the value in HADOOP-14104 but the way Hadoop worked before this JIRA/Issue 
> should have at least had an option specified to allow Hadoop/KMS code to 
> operate similarly to how it did before, by not requesting remote KMSServer URIs 
> which would then attempt to get a delegation token even if not operating on 
> encrypted zones.
> The following error occurs when KMS Server traffic is not allowed between 
> cluster networks, per an enterprise security standard that cannot be changed; 
> the request for an exception was denied, so the only solution is to allow a 
> feature to not attempt to request tokens. 
> {code:java}
> $ hadoop distcp -Ddfs.namenode.kerberos.principal.pattern=* 
> -Dmapreduce.job.hdfs-servers.token-renewal.exclude=tech 
> hdfs:///processed/public/opendata/samples/distcp_test/distcp_file.txt 
> hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt
> 19/05/29 14:06:09 INFO tools.DistCp: Input Options: DistCpOptions
> {atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, overwrite=false, append=false, useDiff=false, 
> fromSnapshot=null, toSnapshot=null, skipCRC=false, blocking=true, 
> numListstatusThreads=0, maxMaps=20, mapBandwidth=100, 
> sslConfigurationFile='null', copyStrategy='uniformsize', preserveStatus=[], 
> preserveRawXattrs=false, atomicWorkPath=null, logPath=null, 
> sourceFileListing=null, 
> sourcePaths=[hdfs:/processed/public/opendata/samples/distcp_test/distcp_file.txt],
>  
> targetPath=hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt,
>  targetPathExists=true, filtersFile='null', verboseLog=false}
> 19/05/29 14:06:09 INFO client.AHSProxy: Connecting to Application History 
> server at ha21d53mn.unit.hdp.example.com/10.70.49.2:10200
> 19/05/29 14:06:10 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 
> 5093920 for gss2002 on ha-hdfs:unit
> 19/05/29 14:06:10 INFO security.TokenCache: Got dt for hdfs://unit; Kind: 
> HDFS_DELEGATION_TOKEN, Service: ha-hdfs:unit, Ident: (HDFS_DELEGATION_TOKEN 
> token 5093920 for gss2002)
> 19/05/29 14:06:10 INFO security.TokenCache: Got dt for hdfs://unit; Kind: 
> kms-dt, Service: ha21d53en.unit.hdp.example.com:9292, Ident: (owner=gss2002, 
> renewer=yarn, realUser=, issueDate=1559153170120, maxDate=1559757970120, 
> sequenceNumber=237, masterKeyId=2)
> 19/05/29 14:06:10 INFO tools.SimpleCopyListing: Paths (files+dirs) cnt = 1; 
> dirCnt = 0
> 19/05/29 14:06:10 INFO tools.SimpleCopyListing: Build file listing completed.
> 19/05/29 14:06:10 INFO tools.DistCp: Number of paths in the 

[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #884: HDDS-1620. Implement Volume Write Requests to use Cache and DoubleBuffer.

2019-06-11 Thread GitBox
bharatviswa504 commented on a change in pull request #884: HDDS-1620. Implement 
Volume Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/884#discussion_r292719948
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/utils/OzoneManagerRatisUtils.java
 ##
 @@ -43,10 +48,27 @@ private OzoneManagerRatisUtils() {
* @return OMClientRequest
* @throws IOException
*/
-  public static OMClientRequest createClientRequest(OMRequest omRequest)
-  throws IOException {
+  public static OMClientRequest createClientRequest(OMRequest omRequest) {
 Type cmdType = omRequest.getCmdType();
 switch (cmdType) {
+case CreateVolume:
+  return new OMVolumeCreateRequest(omRequest);
+case SetVolumeProperty:
+  boolean hasQuota = omRequest.getSetVolumePropertyRequest()
+  .hasQuotaInBytes();
+  boolean hasOwner = 
omRequest.getSetVolumePropertyRequest().hasOwnerName();
+  Preconditions.checkState(hasOwner || hasQuota , "Either Quota or owner " 
+
+  "shuould be set in the SetVolumeProperty request");
+  Preconditions.checkState(!(hasOwner && hasQuota) , "Either Quota or " +
+  "owner shuould be set in the SetVolumeProperty request. Should not " 
+
+  "be set both");
+  if (omRequest.getSetVolumePropertyRequest().hasQuotaInBytes()) {
 
 Review comment:
   Done


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #884: HDDS-1620. Implement Volume Write Requests to use Cache and DoubleBuffer.

2019-06-11 Thread GitBox
bharatviswa504 commented on a change in pull request #884: HDDS-1620. Implement 
Volume Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/884#discussion_r292719938
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/utils/OzoneManagerRatisUtils.java
 ##
 @@ -43,10 +48,27 @@ private OzoneManagerRatisUtils() {
* @return OMClientRequest
* @throws IOException
*/
-  public static OMClientRequest createClientRequest(OMRequest omRequest)
-  throws IOException {
+  public static OMClientRequest createClientRequest(OMRequest omRequest) {
 Type cmdType = omRequest.getCmdType();
 switch (cmdType) {
+case CreateVolume:
+  return new OMVolumeCreateRequest(omRequest);
+case SetVolumeProperty:
+  boolean hasQuota = omRequest.getSetVolumePropertyRequest()
+  .hasQuotaInBytes();
+  boolean hasOwner = 
omRequest.getSetVolumePropertyRequest().hasOwnerName();
+  Preconditions.checkState(hasOwner || hasQuota , "Either Quota or owner " 
+
+  "shuould be set in the SetVolumeProperty request");
+  Preconditions.checkState(!(hasOwner && hasQuota) , "Either Quota or " +
+  "owner shuould be set in the SetVolumeProperty request. Should not " 
+
+  "be set both");
 
 Review comment:
   Done


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #884: HDDS-1620. Implement Volume Write Requests to use Cache and DoubleBuffer.

2019-06-11 Thread GitBox
bharatviswa504 commented on a change in pull request #884: HDDS-1620. Implement 
Volume Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/884#discussion_r292719568
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeSetOwnerRequest.java
 ##
 @@ -0,0 +1,188 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.volume;
+
+import java.io.IOException;
+import java.util.Map;
+
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import com.google.common.base.Optional;
+import com.google.common.base.Preconditions;
+
+import org.apache.hadoop.ozone.audit.AuditLogger;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.request.OMClientRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.volume.OMVolumeCreateResponse;
+import org.apache.hadoop.ozone.om.response.volume.OMVolumeSetOwnerResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.SetVolumePropertyRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.SetVolumePropertyResponse;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+/**
+ * Handle set owner request for volume.
+ */
+public class OMVolumeSetOwnerRequest extends OMClientRequest {
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMVolumeSetOwnerRequest.class);
+
+  public OMVolumeSetOwnerRequest(OMRequest omRequest) {
+super(omRequest);
+  }
+
+  @Override
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+  long transactionLogIndex) {
+
+SetVolumePropertyRequest setVolumePropertyRequest =
+getOmRequest().getSetVolumePropertyRequest();
+
+Preconditions.checkNotNull(setVolumePropertyRequest);
+
+OMResponse.Builder omResponse = OMResponse.newBuilder().setCmdType(
+OzoneManagerProtocolProtos.Type.SetVolumeProperty).setStatus(
+OzoneManagerProtocolProtos.Status.OK).setSuccess(true);
+
+// In production this will never happen, this request will be called only
+// when we have ownerName in setVolumePropertyRequest.
+if (!setVolumePropertyRequest.hasOwnerName()) {
+  omResponse.setStatus(OzoneManagerProtocolProtos.Status.INVALID_REQUEST)
+  .setSuccess(false);
+  return new OMVolumeSetOwnerResponse(null, null, null, null,
+  omResponse.build());
+}
+
+OMMetrics omMetrics = ozoneManager.getMetrics();
+omMetrics.incNumVolumeUpdates();
+String volume = setVolumePropertyRequest.getVolumeName();
+String newOwner = setVolumePropertyRequest.getOwnerName();
+
+AuditLogger auditLogger = ozoneManager.getAuditLogger();
+OzoneManagerProtocolProtos.UserInfo userInfo = 
getOmRequest().getUserInfo();
+
+Map auditMap = buildVolumeAuditMap(volume);
+auditMap.put(OzoneConsts.OWNER, newOwner);
+try {
+  // check Acl
+  if (ozoneManager.getAclsEnabled()) {
+checkAcls(ozoneManager, OzoneObj.ResourceType.VOLUME,
+OzoneObj.StoreType.OZONE, IAccessAuthorizer.ACLType.WRITE_ACL,
+volume, null, null);
+  }
+} catch (IOException ex) {
+  LOG.error("Changing volume ownership failed for user:{} volume:{}",
+  newOwner, volume);
+  omMetrics.incNumVolumeUpdateFails();
+  auditLog(auditLogger, 

[jira] [Assigned] (HADOOP-16350) Ability to tell Hadoop not to request KMS Information from Remote NN

2019-06-11 Thread Tsz Wo Nicholas Sze (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze reassigned HADOOP-16350:


Assignee: Greg Senia

> Ability to tell Hadoop not to request KMS Information from Remote NN 
> -
>
> Key: HADOOP-16350
> URL: https://issues.apache.org/jira/browse/HADOOP-16350
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, kms
>Affects Versions: 2.8.3, 3.0.0, 2.7.6, 3.1.2
>Reporter: Greg Senia
>Assignee: Greg Senia
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16350.patch
>
>
> Before HADOOP-14104 Remote KMSServer URIs were not requested from the remote 
> NameNode and their associated remote KMSServer delegation token. Many 
> customers were using this as a security feature to prevent TDE/Encryption 
> Zone data from being distcped to remote clusters. But there was still a use 
> case to allow distcp of data residing in folders that are not being encrypted 
> with a KMSProvider/Encrypted Zone.
> So after upgrading to a version of Hadoop that contained HADOOP-14104 distcp 
> now fails as we along with other customers (HDFS-13696) DO NOT allow 
> KMSServer endpoints to be exposed out of our cluster network as data residing 
> in these TDE/Zones contain very critical data that cannot be distcped between 
> clusters.
> I propose adding a new code block with the following custom property 
> "hadoop.security.kms.client.allow.remote.kms" it will default to "true" so 
> keeping current feature of HADOOP-14104 but if specified to "false" will 
> allow this area of code to operate as it did before HADOOP-14104. I can see 
> the value in HADOOP-14104 but the way Hadoop worked before this JIRA/Issue 
> should have at least had an option specified to allow Hadoop/KMS code to 
> operate similarly to how it did before, by not requesting remote KMSServer URIs 
> which would then attempt to get a delegation token even if not operating on 
> encrypted zones.
> The following error occurs when KMS Server traffic is not allowed between 
> cluster networks, per an enterprise security standard that cannot be changed; 
> the request for an exception was denied, so the only solution is to allow a 
> feature to not attempt to request tokens. 
> {code:java}
> $ hadoop distcp -Ddfs.namenode.kerberos.principal.pattern=* 
> -Dmapreduce.job.hdfs-servers.token-renewal.exclude=tech 
> hdfs:///processed/public/opendata/samples/distcp_test/distcp_file.txt 
> hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt
> 19/05/29 14:06:09 INFO tools.DistCp: Input Options: DistCpOptions
> {atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, overwrite=false, append=false, useDiff=false, 
> fromSnapshot=null, toSnapshot=null, skipCRC=false, blocking=true, 
> numListstatusThreads=0, maxMaps=20, mapBandwidth=100, 
> sslConfigurationFile='null', copyStrategy='uniformsize', preserveStatus=[], 
> preserveRawXattrs=false, atomicWorkPath=null, logPath=null, 
> sourceFileListing=null, 
> sourcePaths=[hdfs:/processed/public/opendata/samples/distcp_test/distcp_file.txt],
>  
> targetPath=hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt,
>  targetPathExists=true, filtersFile='null', verboseLog=false}
> 19/05/29 14:06:09 INFO client.AHSProxy: Connecting to Application History 
> server at ha21d53mn.unit.hdp.example.com/10.70.49.2:10200
> 19/05/29 14:06:10 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 
> 5093920 for gss2002 on ha-hdfs:unit
> 19/05/29 14:06:10 INFO security.TokenCache: Got dt for hdfs://unit; Kind: 
> HDFS_DELEGATION_TOKEN, Service: ha-hdfs:unit, Ident: (HDFS_DELEGATION_TOKEN 
> token 5093920 for gss2002)
> 19/05/29 14:06:10 INFO security.TokenCache: Got dt for hdfs://unit; Kind: 
> kms-dt, Service: ha21d53en.unit.hdp.example.com:9292, Ident: (owner=gss2002, 
> renewer=yarn, realUser=, issueDate=1559153170120, maxDate=1559757970120, 
> sequenceNumber=237, masterKeyId=2)
> 19/05/29 14:06:10 INFO tools.SimpleCopyListing: Paths (files+dirs) cnt = 1; 
> dirCnt = 0
> 19/05/29 14:06:10 INFO tools.SimpleCopyListing: Build file listing completed.
> 19/05/29 14:06:10 INFO tools.DistCp: Number of paths in the copy list: 1
> 19/05/29 14:06:10 INFO tools.DistCp: Number of paths in the copy list: 1
> 19/05/29 14:06:10 INFO client.AHSProxy: Connecting to Application History 
> server at ha21d53mn.unit.hdp.example.com/10.70.49.2:10200
> 19/05/29 14:06:10 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 
> 556079 for gss2002 on ha-hdfs:tech
> 19/05/29 14:06:10 ERROR tools.DistCp: Exception encountered 
> java.io.IOException: java.net.NoRouteToHostException: No route to host (Host 
> unreachable)
> at 
> 

[GitHub] [hadoop] bharatviswa504 commented on issue #884: HDDS-1620. Implement Volume Write Requests to use Cache and DoubleBuffer.

2019-06-11 Thread GitBox
bharatviswa504 commented on issue #884: HDDS-1620. Implement Volume Write 
Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/884#issuecomment-501086712
 
 
   But then why do we need to get the first volume lock? It is a read operation 
only.
   
   So that the volume is not modified during the ownerName read. 
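
   In other words, it is the usual read-under-lock pattern; a generic 
   illustration with JDK locks (not the actual OzoneManagerLock API):

       java.util.concurrent.locks.ReentrantReadWriteLock lock =
           new java.util.concurrent.locks.ReentrantReadWriteLock();
       lock.readLock().lock();
       try {
         // Read the volume's owner name here: holding the lock means a
         // concurrent setOwner/delete cannot change the entry mid-read.
       } finally {
         lock.readLock().unlock();
       }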


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16350) Ability to tell Hadoop not to request KMS Information from Remote NN

2019-06-11 Thread Tsz Wo Nicholas Sze (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16861658#comment-16861658
 ] 

Tsz Wo Nicholas Sze commented on HADOOP-16350:
--

[~gss2002], thanks for filing the JIRA and providing a patch!

HADOOP-14104 changed the client to always ask the namenode for the kms provider 
path.  Instead of adding a new conf, we should further change the client to 
allow overriding the server conf.  I.e., the client also reads the kms provider 
path from its own conf; if that conf is set to an empty string, don't use 
KMSClientProvider.
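
A rough sketch of that idea (assuming the standard client-side key 
"hadoop.security.key.provider.path"; the class and method names are 
illustrative only, not the eventual change):

{code:java}
import org.apache.hadoop.conf.Configuration;

// Illustrative only: let the client conf override the NameNode-supplied value.
public final class KmsProviderOverrideSketch {
  /** Returns the provider path to use, or null to skip KMSClientProvider. */
  public static String resolveKmsProviderPath(Configuration conf,
      String pathFromNameNode) {
    String clientPath = conf.getTrimmed("hadoop.security.key.provider.path");
    if (clientPath == null) {
      // No client-side override: keep the HADOOP-14104 behaviour.
      return pathFromNameNode;
    }
    // An explicit empty string means: do not use KMSClientProvider at all.
    return clientPath.isEmpty() ? null : clientPath;
  }
}
{code}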

> Ability to tell Hadoop not to request KMS Information from Remote NN 
> -
>
> Key: HADOOP-16350
> URL: https://issues.apache.org/jira/browse/HADOOP-16350
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, kms
>Affects Versions: 2.8.3, 3.0.0, 2.7.6, 3.1.2
>Reporter: Greg Senia
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16350.patch
>
>
> Before HADOOP-14104 Remote KMSServer URIs were not requested from the remote 
> NameNode and their associated remote KMSServer delegation token. Many 
> customers were using this as a security feature to prevent TDE/Encryption 
> Zone data from being distcped to remote clusters. But there was still a use 
> case to allow distcp of data residing in folders that are not being encrypted 
> with a KMSProvider/Encrypted Zone.
> So after upgrading to a version of Hadoop that contained HADOOP-14104 distcp 
> now fails as we along with other customers (HDFS-13696) DO NOT allow 
> KMSServer endpoints to be exposed out of our cluster network as data residing 
> in these TDE/Zones contain very critical data that cannot be distcped between 
> clusters.
> I propose adding a new code block with the following custom property 
> "hadoop.security.kms.client.allow.remote.kms" it will default to "true" so 
> keeping current feature of HADOOP-14104 but if specified to "false" will 
> allow this area of code to operate as it did before HADOOP-14104. I can see 
> the value in HADOOP-14104 but the way Hadoop worked before this JIRA/Issue 
> should have at least had an option specified to allow Hadoop/KMS code to 
> operate similarly to how it did before, by not requesting remote KMSServer URIs 
> which would then attempt to get a delegation token even if not operating on 
> encrypted zones.
> The following error occurs when KMS Server traffic is not allowed between 
> cluster networks, per an enterprise security standard that cannot be changed; 
> the request for an exception was denied, so the only solution is to allow a 
> feature to not attempt to request tokens. 
> {code:java}
> $ hadoop distcp -Ddfs.namenode.kerberos.principal.pattern=* 
> -Dmapreduce.job.hdfs-servers.token-renewal.exclude=tech 
> hdfs:///processed/public/opendata/samples/distcp_test/distcp_file.txt 
> hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt
> 19/05/29 14:06:09 INFO tools.DistCp: Input Options: DistCpOptions
> {atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, overwrite=false, append=false, useDiff=false, 
> fromSnapshot=null, toSnapshot=null, skipCRC=false, blocking=true, 
> numListstatusThreads=0, maxMaps=20, mapBandwidth=100, 
> sslConfigurationFile='null', copyStrategy='uniformsize', preserveStatus=[], 
> preserveRawXattrs=false, atomicWorkPath=null, logPath=null, 
> sourceFileListing=null, 
> sourcePaths=[hdfs:/processed/public/opendata/samples/distcp_test/distcp_file.txt],
>  
> targetPath=hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt,
>  targetPathExists=true, filtersFile='null', verboseLog=false}
> 19/05/29 14:06:09 INFO client.AHSProxy: Connecting to Application History 
> server at ha21d53mn.unit.hdp.example.com/10.70.49.2:10200
> 19/05/29 14:06:10 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 
> 5093920 for gss2002 on ha-hdfs:unit
> 19/05/29 14:06:10 INFO security.TokenCache: Got dt for hdfs://unit; Kind: 
> HDFS_DELEGATION_TOKEN, Service: ha-hdfs:unit, Ident: (HDFS_DELEGATION_TOKEN 
> token 5093920 for gss2002)
> 19/05/29 14:06:10 INFO security.TokenCache: Got dt for hdfs://unit; Kind: 
> kms-dt, Service: ha21d53en.unit.hdp.example.com:9292, Ident: (owner=gss2002, 
> renewer=yarn, realUser=, issueDate=1559153170120, maxDate=1559757970120, 
> sequenceNumber=237, masterKeyId=2)
> 19/05/29 14:06:10 INFO tools.SimpleCopyListing: Paths (files+dirs) cnt = 1; 
> dirCnt = 0
> 19/05/29 14:06:10 INFO tools.SimpleCopyListing: Build file listing completed.
> 19/05/29 14:06:10 INFO tools.DistCp: Number of paths in the copy list: 1
> 19/05/29 14:06:10 INFO tools.DistCp: Number of paths in the copy list: 1
> 19/05/29 14:06:10 INFO client.AHSProxy: Connecting to Application History 
> server at 

[GitHub] [hadoop] hanishakoneru commented on a change in pull request #884: HDDS-1620. Implement Volume Write Requests to use Cache and DoubleBuffer.

2019-06-11 Thread GitBox
hanishakoneru commented on a change in pull request #884: HDDS-1620. Implement 
Volume Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/884#discussion_r292718676
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeSetOwnerRequest.java
 ##
 @@ -0,0 +1,188 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.volume;
+
+import java.io.IOException;
+import java.util.Map;
+
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import com.google.common.base.Optional;
+import com.google.common.base.Preconditions;
+
+import org.apache.hadoop.ozone.audit.AuditLogger;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.request.OMClientRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.volume.OMVolumeCreateResponse;
+import org.apache.hadoop.ozone.om.response.volume.OMVolumeSetOwnerResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.SetVolumePropertyRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.SetVolumePropertyResponse;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+/**
+ * Handle set owner request for volume.
+ */
+public class OMVolumeSetOwnerRequest extends OMClientRequest {
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMVolumeSetOwnerRequest.class);
+
+  public OMVolumeSetOwnerRequest(OMRequest omRequest) {
+super(omRequest);
+  }
+
+  @Override
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+  long transactionLogIndex) {
+
+SetVolumePropertyRequest setVolumePropertyRequest =
+getOmRequest().getSetVolumePropertyRequest();
+
+Preconditions.checkNotNull(setVolumePropertyRequest);
+
+OMResponse.Builder omResponse = OMResponse.newBuilder().setCmdType(
+OzoneManagerProtocolProtos.Type.SetVolumeProperty).setStatus(
+OzoneManagerProtocolProtos.Status.OK).setSuccess(true);
+
+// In production this will never happen, this request will be called only
+// when we have ownerName in setVolumePropertyRequest.
+if (!setVolumePropertyRequest.hasOwnerName()) {
+  omResponse.setStatus(OzoneManagerProtocolProtos.Status.INVALID_REQUEST)
+  .setSuccess(false);
+  return new OMVolumeSetOwnerResponse(null, null, null, null,
+  omResponse.build());
+}
+
+OMMetrics omMetrics = ozoneManager.getMetrics();
+omMetrics.incNumVolumeUpdates();
+String volume = setVolumePropertyRequest.getVolumeName();
+String newOwner = setVolumePropertyRequest.getOwnerName();
+
+AuditLogger auditLogger = ozoneManager.getAuditLogger();
+OzoneManagerProtocolProtos.UserInfo userInfo = 
getOmRequest().getUserInfo();
+
+Map auditMap = buildVolumeAuditMap(volume);
+auditMap.put(OzoneConsts.OWNER, newOwner);
+try {
+  // check Acl
+  if (ozoneManager.getAclsEnabled()) {
+checkAcls(ozoneManager, OzoneObj.ResourceType.VOLUME,
+OzoneObj.StoreType.OZONE, IAccessAuthorizer.ACLType.WRITE_ACL,
+volume, null, null);
+  }
+} catch (IOException ex) {
+  LOG.error("Changing volume ownership failed for user:{} volume:{}",
+  newOwner, volume);
+  omMetrics.incNumVolumeUpdateFails();
+  auditLog(auditLogger, 

[GitHub] [hadoop] hanishakoneru commented on a change in pull request #884: HDDS-1620. Implement Volume Write Requests to use Cache and DoubleBuffer.

2019-06-11 Thread GitBox
hanishakoneru commented on a change in pull request #884: HDDS-1620. Implement 
Volume Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/884#discussion_r292717237
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/utils/OzoneManagerRatisUtils.java
 ##
 @@ -43,10 +48,27 @@ private OzoneManagerRatisUtils() {
* @return OMClientRequest
* @throws IOException
*/
-  public static OMClientRequest createClientRequest(OMRequest omRequest)
-  throws IOException {
+  public static OMClientRequest createClientRequest(OMRequest omRequest) {
 Type cmdType = omRequest.getCmdType();
 switch (cmdType) {
+case CreateVolume:
+  return new OMVolumeCreateRequest(omRequest);
+case SetVolumeProperty:
+  boolean hasQuota = omRequest.getSetVolumePropertyRequest()
+  .hasQuotaInBytes();
+  boolean hasOwner = 
omRequest.getSetVolumePropertyRequest().hasOwnerName();
+  Preconditions.checkState(hasOwner || hasQuota , "Either Quota or owner " 
+
+  "shuould be set in the SetVolumeProperty request");
+  Preconditions.checkState(!(hasOwner && hasQuota) , "Either Quota or " +
+  "owner shuould be set in the SetVolumeProperty request. Should not " 
+
+  "be set both");
+  if (omRequest.getSetVolumePropertyRequest().hasQuotaInBytes()) {
 
 Review comment:
   NIT: we can use hasQuota variable here
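
   Roughly, the nit applied would look like this (the OMVolumeSetQuotaRequest 
   name is assumed here, since that branch is not visible in the quoted diff):

       // Reuse the flags computed above instead of calling the getters again;
       // the preceding checkState calls guarantee exactly one flag is set.
       if (hasQuota) {
         return new OMVolumeSetQuotaRequest(omRequest);  // assumed class name
       } else {
         return new OMVolumeSetOwnerRequest(omRequest);
       }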


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hanishakoneru commented on a change in pull request #884: HDDS-1620. Implement Volume Write Requests to use Cache and DoubleBuffer.

2019-06-11 Thread GitBox
hanishakoneru commented on a change in pull request #884: HDDS-1620. Implement 
Volume Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/884#discussion_r292718137
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeDeleteRequest.java
 ##
 @@ -0,0 +1,195 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.volume;
+
+import java.io.IOException;
+
+import com.google.common.base.Optional;
+import com.google.common.base.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.ozone.audit.AuditLogger;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.response.volume.OMVolumeCreateResponse;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.ozone.om.request.OMClientRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.volume.OMVolumeDeleteResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.DeleteVolumeRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.DeleteVolumeResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+
+/**
+ * Handles volume delete request.
+ */
+public class OMVolumeDeleteRequest extends OMClientRequest {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMVolumeDeleteRequest.class);
+
+  public OMVolumeDeleteRequest(OMRequest omRequest) {
+super(omRequest);
+  }
+
+  @Override
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+  long transactionLogIndex) {
+
+DeleteVolumeRequest deleteVolumeRequest =
+getOmRequest().getDeleteVolumeRequest();
+Preconditions.checkNotNull(deleteVolumeRequest);
+
+String volume = deleteVolumeRequest.getVolumeName();
+
+OMMetrics omMetrics = ozoneManager.getMetrics();
+omMetrics.incNumVolumeDeletes();
+
+OMResponse.Builder omResponse = OMResponse.newBuilder().setCmdType(
+OzoneManagerProtocolProtos.Type.CreateVolume).setStatus(
+OzoneManagerProtocolProtos.Status.OK).setSuccess(true);
+
+AuditLogger auditLogger = ozoneManager.getAuditLogger();
+OzoneManagerProtocolProtos.UserInfo userInfo = 
getOmRequest().getUserInfo();
+
+try {
+  // check Acl
+  if (ozoneManager.getAclsEnabled()) {
+checkAcls(ozoneManager, OzoneObj.ResourceType.VOLUME,
+OzoneObj.StoreType.OZONE, IAccessAuthorizer.ACLType.DELETE, volume,
+null, null);
+  }
+} catch (IOException ex) {
+  LOG.error("Volume deletion failed for volume:{}", volume, ex);
+  omMetrics.incNumVolumeDeleteFails();
+  auditLog(auditLogger, buildAuditMessage(OMAction.DELETE_VOLUME,
+  buildVolumeAuditMap(volume), ex, userInfo));
+  return new OMVolumeCreateResponse(null, null,
+  createErrorOMResponse(omResponse, ex));
+}
+
+OMMetadataManager omMetadataManager = ozoneManager.getMetadataManager();
+
+OmVolumeArgs omVolumeArgs = null;
+try {
+  omVolumeArgs = getVolumeInfo(omMetadataManager, volume);
 
 Review comment:
   But then why do we need to acquire the volume lock first? It is only a read 
operation. 
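   For context on the read vs. write distinction raised here, below is a minimal,
   self-contained Java sketch (plain JDK locks, not OzoneManager's actual lock
   API) of why a lookup that only reads volume metadata could run under a shared
   read lock, while create/delete take the exclusive write lock. Class and field
   names are illustrative only.

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Toy volume table guarded by a read/write lock: many readers may hold the
// read lock at once, while a writer excludes both readers and other writers.
public class VolumeTableSketch {
  private final Map<String, String> volumeTable = new HashMap<>();
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

  // Read-only lookup: a shared read lock is sufficient.
  public String getVolumeInfo(String volume) {
    lock.readLock().lock();
    try {
      return volumeTable.get(volume);
    } finally {
      lock.readLock().unlock();
    }
  }

  // Mutation: deleting a volume needs the exclusive write lock.
  public void deleteVolume(String volume) {
    lock.writeLock().lock();
    try {
      volumeTable.remove(volume);
    } finally {
      lock.writeLock().unlock();
    }
  }
}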


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

[GitHub] [hadoop] hanishakoneru commented on a change in pull request #884: HDDS-1620. Implement Volume Write Requests to use Cache and DoubleBuffer.

2019-06-11 Thread GitBox
hanishakoneru commented on a change in pull request #884: HDDS-1620. Implement 
Volume Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/884#discussion_r292717060
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/utils/OzoneManagerRatisUtils.java
 ##
 @@ -43,10 +48,27 @@ private OzoneManagerRatisUtils() {
* @return OMClientRequest
* @throws IOException
*/
-  public static OMClientRequest createClientRequest(OMRequest omRequest)
-  throws IOException {
+  public static OMClientRequest createClientRequest(OMRequest omRequest) {
 Type cmdType = omRequest.getCmdType();
 switch (cmdType) {
+case CreateVolume:
+  return new OMVolumeCreateRequest(omRequest);
+case SetVolumeProperty:
+  boolean hasQuota = omRequest.getSetVolumePropertyRequest()
+  .hasQuotaInBytes();
+  boolean hasOwner = omRequest.getSetVolumePropertyRequest().hasOwnerName();
+  Preconditions.checkState(hasOwner || hasQuota , "Either Quota or owner " +
+  "shuould be set in the SetVolumeProperty request");
+  Preconditions.checkState(!(hasOwner && hasQuota) , "Either Quota or " +
+  "owner shuould be set in the SetVolumeProperty request. Should not " +
+  "be set both");
 
 Review comment:
   NIT: 1. typo: "shuould" should be "should"
   2. wording: "Should not be set both" should read "should not both be set"
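   For reference, a sketch of what the corrected checks could look like once the
   typos and wording are fixed (the helper class below is hypothetical and only
   exists to keep the example self-contained; the checked conditions are
   unchanged from the patch):

import com.google.common.base.Preconditions;
import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
    .SetVolumePropertyRequest;

// Hypothetical helper illustrating the corrected Preconditions messages.
final class SetVolumePropertyValidation {
  private SetVolumePropertyValidation() {
  }

  static void validate(SetVolumePropertyRequest request) {
    boolean hasQuota = request.hasQuotaInBytes();
    boolean hasOwner = request.hasOwnerName();
    Preconditions.checkState(hasOwner || hasQuota,
        "Either quota or owner should be set in the SetVolumeProperty request");
    Preconditions.checkState(!(hasOwner && hasQuota),
        "Quota and owner should not both be set in the SetVolumeProperty request");
  }
}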


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16365) Upgrade jackson-databind to 2.9.9

2019-06-11 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16861653#comment-16861653
 ] 

Hadoop QA commented on HADOOP-16365:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
30m 55s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  6s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
13s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 47m  6s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HADOOP-16365 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12971514/HADOOP-16365.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  |
| uname | Linux fef1210399e1 4.4.0-141-generic #167~14.04.1-Ubuntu SMP Mon Dec 
10 13:20:24 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 4ea6c2f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16316/testReport/ |
| Max. process+thread count | 312 (vs. ulimit of 1) |
| modules | C: hadoop-project U: hadoop-project |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16316/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Upgrade jackson-databind to 2.9.9
> -
>
> Key: HADOOP-16365
> URL: https://issues.apache.org/jira/browse/HADOOP-16365
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Reporter: Shweta
>Assignee: Shweta
>Priority: Major
> Attachments: HADOOP-16365.001.patch
>
>
> Add 2.9.9 version of Jackson-databind 



--
This message was sent 

[GitHub] [hadoop] hadoop-yetus commented on issue #884: HDDS-1620. Implement Volume Write Requests to use Cache and DoubleBuffer.

2019-06-11 Thread GitBox
hadoop-yetus commented on issue #884: HDDS-1620. Implement Volume Write 
Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/884#issuecomment-501078757
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 71 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 2 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 12 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 20 | Maven dependency ordering for branch |
   | +1 | mvninstall | 488 | trunk passed |
   | +1 | compile | 278 | trunk passed |
   | +1 | checkstyle | 80 | trunk passed |
   | +1 | mvnsite | 1 | trunk passed |
   | +1 | shadedclient | 891 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 167 | trunk passed |
   | 0 | spotbugs | 324 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 507 | trunk passed |
   | -0 | patch | 379 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 28 | Maven dependency ordering for patch |
   | +1 | mvninstall | 447 | the patch passed |
   | +1 | compile | 281 | the patch passed |
   | +1 | cc | 281 | the patch passed |
   | -1 | javac | 187 | hadoop-ozone generated 4 new + 3 unchanged - 0 fixed = 
7 total (was 3) |
   | -0 | checkstyle | 42 | hadoop-ozone: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 739 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 222 | the patch passed |
   | +1 | findbugs | 626 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 222 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1857 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 52 | The patch does not generate ASF License warnings. |
   | | | 7261 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.common.impl.TestHddsDispatcher 
|
   |   | hadoop.ozone.TestMiniOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.hdds.scm.pipeline.TestPipelineClose |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.5 Server=18.09.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/16/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/884 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc xml |
   | uname | Linux 06670fd8251a 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4ea6c2f |
   | Default Java | 1.8.0_212 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/16/artifact/out/diff-compile-javac-hadoop-ozone.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/16/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/16/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/16/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/16/testReport/ |
   | Max. process+thread count | 4296 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/16/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure 

[GitHub] [hadoop] hadoop-yetus commented on issue #884: HDDS-1620. Implement Volume Write Requests to use Cache and DoubleBuffer.

2019-06-11 Thread GitBox
hadoop-yetus commented on issue #884: HDDS-1620. Implement Volume Write 
Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/884#issuecomment-501075842
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 32 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 2 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 12 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 67 | Maven dependency ordering for branch |
   | +1 | mvninstall | 523 | trunk passed |
   | +1 | compile | 301 | trunk passed |
   | +1 | checkstyle | 91 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 904 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 479 | trunk passed |
   | 0 | spotbugs | 777 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 1253 | trunk passed |
   | -0 | patch | 961 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 141 | Maven dependency ordering for patch |
   | +1 | mvninstall | 590 | the patch passed |
   | +1 | compile | 301 | the patch passed |
   | +1 | cc | 301 | the patch passed |
   | -1 | javac | 198 | hadoop-ozone generated 4 new + 3 unchanged - 0 fixed = 
7 total (was 3) |
   | -0 | checkstyle | 49 | hadoop-ozone: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 704 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 179 | the patch passed |
   | +1 | findbugs | 533 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 147 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1082 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 49 | The patch does not generate ASF License warnings. |
   | | | 7825 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.common.impl.TestHddsDispatcher 
|
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerCommandHandler
 |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestHybridPipelineOnDatanode |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/15/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/884 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc xml |
   | uname | Linux 3072a62bab7c 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4fecc2a |
   | Default Java | 1.8.0_212 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/15/artifact/out/diff-compile-javac-hadoop-ozone.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/15/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/15/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/15/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/15/testReport/ |
   | Max. process+thread count | 4635 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/15/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (HADOOP-16365) Upgrade jackson-databind to 2.9.9

2019-06-11 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16861634#comment-16861634
 ] 

Wei-Chiu Chuang commented on HADOOP-16365:
--

instead of adding another tag, you should update the existing tag

> Upgrade jackson-databind to 2.9.9
> -
>
> Key: HADOOP-16365
> URL: https://issues.apache.org/jira/browse/HADOOP-16365
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Reporter: Shweta
>Assignee: Shweta
>Priority: Major
> Attachments: HADOOP-16365.001.patch
>
>
> Add 2.9.9 version of Jackson-databind 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16365) Upgrade jackson-databind to 2.9.9

2019-06-11 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-16365:
-
Status: Patch Available  (was: Open)

> Upgrade jackson-databind to 2.9.9
> -
>
> Key: HADOOP-16365
> URL: https://issues.apache.org/jira/browse/HADOOP-16365
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Reporter: Shweta
>Assignee: Shweta
>Priority: Major
> Attachments: HADOOP-16365.001.patch
>
>
> Add 2.9.9 version of Jackson-databind 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #927: HDDS-1543. Implement addAcl,removeAcl,setAcl,getAcl for Prefix. Contr…

2019-06-11 Thread GitBox
xiaoyuyao commented on a change in pull request #927: HDDS-1543. Implement 
addAcl,removeAcl,setAcl,getAcl for Prefix. Contr…
URL: https://github.com/apache/hadoop/pull/927#discussion_r292707539
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
 ##
 @@ -2200,6 +2201,66 @@ public void testNativeAclsForKey() throws Exception {
 validateOzoneAcl(ozObj);
   }
 
+  @Test
+  public void testNativeAclsForPrefix() throws Exception {
+String volumeName = UUID.randomUUID().toString();
+String bucketName = UUID.randomUUID().toString();
+
+String prefix1 = "PF" + UUID.randomUUID().toString() + "/";
+String key1 = prefix1 + "KEY" + UUID.randomUUID().toString();
+
+String prefix2 = "PF" + UUID.randomUUID().toString() + "/";
+String key2 = prefix2 + "KEY" + UUID.randomUUID().toString();
+
+store.createVolume(volumeName);
+OzoneVolume volume = store.getVolume(volumeName);
+volume.createBucket(bucketName);
+OzoneBucket bucket = volume.getBucket(bucketName);
+assertNotNull("Bucket creation failed", bucket);
+
+writeKey(key1, bucket);
+writeKey(key2, bucket);
+
+OzoneObj ozObj = new OzoneObjInfo.Builder()
+.setVolumeName(volumeName)
+.setBucketName(bucketName)
+.setPrefixName(prefix1)
+.setResType(OzoneObj.ResourceType.PREFIX)
+.setStoreType(OzoneObj.StoreType.OZONE)
+.build();
+
+// add acl
+BitSet aclRights1 = new BitSet();
+aclRights1.set(ACLType.READ.ordinal());
+OzoneAcl user1Acl = new OzoneAcl(ACLIdentityType.USER,
+"user1", aclRights1);
+assertTrue(store.addAcl(ozObj, user1Acl));
+
+// get acl
+List<OzoneAcl> aclsGet = store.getAcl(ozObj);
+Assert.assertEquals(1, aclsGet.size());
+Assert.assertEquals(user1Acl, aclsGet.get(0));
+
+// remove acl
+Assert.assertTrue(store.removeAcl(ozObj, user1Acl));
+aclsGet = store.getAcl(ozObj);
+Assert.assertEquals(0, aclsGet.size());
+
+// set acl
+BitSet aclRights2 = new BitSet();
+aclRights2.set(ACLType.ALL.ordinal());
+OzoneAcl group1Acl = new OzoneAcl(ACLIdentityType.GROUP,
+"group1", aclRights2);
+List<OzoneAcl> acls = new ArrayList<>();
+acls.add(user1Acl);
+acls.add(group1Acl);
+Assert.assertTrue(store.setAcl(ozObj, acls));
+
+// get acl
+aclsGet = store.getAcl(ozObj);
+Assert.assertEquals(2, aclsGet.size());
 
 Review comment:
   Took a further look and found that the assumptions for volume/bucket/key ACLs 
are different from prefix ACLs, e.g., there is no default ACL for a prefix. As a 
result, I've covered all the cases in separate tests. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #927: HDDS-1543. Implement addAcl,removeAcl,setAcl,getAcl for Prefix. Contr…

2019-06-11 Thread GitBox
xiaoyuyao commented on a change in pull request #927: HDDS-1543. Implement 
addAcl,removeAcl,setAcl,getAcl for Prefix. Contr…
URL: https://github.com/apache/hadoop/pull/927#discussion_r292707539
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
 ##
 @@ -2200,6 +2201,66 @@ public void testNativeAclsForKey() throws Exception {
 validateOzoneAcl(ozObj);
   }
 
+  @Test
+  public void testNativeAclsForPrefix() throws Exception {
+String volumeName = UUID.randomUUID().toString();
+String bucketName = UUID.randomUUID().toString();
+
+String prefix1 = "PF" + UUID.randomUUID().toString() + "/";
+String key1 = prefix1 + "KEY" + UUID.randomUUID().toString();
+
+String prefix2 = "PF" + UUID.randomUUID().toString() + "/";
+String key2 = prefix2 + "KEY" + UUID.randomUUID().toString();
+
+store.createVolume(volumeName);
+OzoneVolume volume = store.getVolume(volumeName);
+volume.createBucket(bucketName);
+OzoneBucket bucket = volume.getBucket(bucketName);
+assertNotNull("Bucket creation failed", bucket);
+
+writeKey(key1, bucket);
+writeKey(key2, bucket);
+
+OzoneObj ozObj = new OzoneObjInfo.Builder()
+.setVolumeName(volumeName)
+.setBucketName(bucketName)
+.setPrefixName(prefix1)
+.setResType(OzoneObj.ResourceType.PREFIX)
+.setStoreType(OzoneObj.StoreType.OZONE)
+.build();
+
+// add acl
+BitSet aclRights1 = new BitSet();
+aclRights1.set(ACLType.READ.ordinal());
+OzoneAcl user1Acl = new OzoneAcl(ACLIdentityType.USER,
+"user1", aclRights1);
+assertTrue(store.addAcl(ozObj, user1Acl));
+
+// get acl
+List<OzoneAcl> aclsGet = store.getAcl(ozObj);
+Assert.assertEquals(1, aclsGet.size());
+Assert.assertEquals(user1Acl, aclsGet.get(0));
+
+// remove acl
+Assert.assertTrue(store.removeAcl(ozObj, user1Acl));
+aclsGet = store.getAcl(ozObj);
+Assert.assertEquals(0, aclsGet.size());
+
+// set acl
+BitSet aclRights2 = new BitSet();
+aclRights2.set(ACLType.ALL.ordinal());
+OzoneAcl group1Acl = new OzoneAcl(ACLIdentityType.GROUP,
+"group1", aclRights2);
+List<OzoneAcl> acls = new ArrayList<>();
+acls.add(user1Acl);
+acls.add(group1Acl);
+Assert.assertTrue(store.setAcl(ozObj, acls));
+
+// get acl
+aclsGet = store.getAcl(ozObj);
+Assert.assertEquals(2, aclsGet.size());
 
 Review comment:
   validateOzoneAcl() is reused at the beginning of the new test cases.  


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #884: HDDS-1620. Implement Volume Write Requests to use Cache and DoubleBuffer.

2019-06-11 Thread GitBox
hadoop-yetus commented on issue #884: HDDS-1620. Implement Volume Write 
Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/884#issuecomment-501072111
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 42 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 12 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for branch |
   | +1 | mvninstall | 514 | trunk passed |
   | +1 | compile | 318 | trunk passed |
   | +1 | checkstyle | 81 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 955 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 166 | trunk passed |
   | 0 | spotbugs | 342 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 544 | trunk passed |
   | -0 | patch | 398 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 30 | Maven dependency ordering for patch |
   | +1 | mvninstall | 472 | the patch passed |
   | +1 | compile | 295 | the patch passed |
   | +1 | cc | 295 | the patch passed |
   | -1 | javac | 197 | hadoop-ozone generated 4 new + 3 unchanged - 0 fixed = 
7 total (was 3) |
   | -0 | checkstyle | 44 | hadoop-ozone: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 755 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 185 | the patch passed |
   | +1 | findbugs | 611 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 166 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1542 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 50 | The patch does not generate ASF License warnings. |
   | | | 7024 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.common.impl.TestHddsDispatcher 
|
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/14/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/884 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc xml |
   | uname | Linux 8bec5cbf58e3 4.4.0-141-generic #167~14.04.1-Ubuntu SMP Mon 
Dec 10 13:20:24 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4fecc2a |
   | Default Java | 1.8.0_212 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/14/artifact/out/diff-compile-javac-hadoop-ozone.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/14/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/14/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/14/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/14/testReport/ |
   | Max. process+thread count | 5415 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-884/14/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To 

[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #927: HDDS-1543. Implement addAcl,removeAcl,setAcl,getAcl for Prefix. Contr…

2019-06-11 Thread GitBox
xiaoyuyao commented on a change in pull request #927: HDDS-1543. Implement 
addAcl,removeAcl,setAcl,getAcl for Prefix. Contr…
URL: https://github.com/apache/hadoop/pull/927#discussion_r292707083
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/acl/OzoneObjInfo.java
 ##
 @@ -70,37 +77,55 @@ public String getKeyName() {
 return keyName;
   }
 
+  @Override
+  public String getPrefixName() {
+return keyName;
+  }
 
 Review comment:
   document added.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #843: HADOOP-15183 S3Guard store becomes inconsistent after partial failure of rename

2019-06-11 Thread GitBox
hadoop-yetus commented on issue #843: HADOOP-15183 S3Guard store becomes 
inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/843#issuecomment-501071250
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 31 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 32 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1094 | trunk passed |
   | +1 | compile | 1145 | trunk passed |
   | +1 | checkstyle | 144 | trunk passed |
   | +1 | mvnsite | 120 | trunk passed |
   | +1 | shadedclient | 975 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 82 | trunk passed |
   | 0 | spotbugs | 74 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 201 | trunk passed |
   | -0 | patch | 105 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for patch |
   | +1 | mvninstall | 82 | the patch passed |
   | +1 | compile | 1009 | the patch passed |
   | +1 | javac | 1009 | the patch passed |
   | -0 | checkstyle | 144 | root: The patch generated 20 new + 100 unchanged - 
2 fixed = 120 total (was 102) |
   | +1 | mvnsite | 126 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 3 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 696 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 99 | the patch passed |
   | +1 | findbugs | 223 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 504 | hadoop-common in the patch failed. |
   | +1 | unit | 293 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 51 | The patch does not generate ASF License warnings. |
   | | | 7086 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.util.TestDiskChecker |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-843/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/843 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 86f97d3cf0fe 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4fecc2a |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-843/9/artifact/out/diff-checkstyle-root.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-843/9/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-843/9/testReport/ |
   | Max. process+thread count | 1348 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-843/9/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16361) TestSecureLogins#testValidKerberosName fails on branch-2

2019-06-11 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16861617#comment-16861617
 ] 

Eric Yang commented on HADOOP-16361:


The root cause is an incorrect regex for parsing the Kerberos principal that 
triggers the auth_to_local mapping lookup.  In the test case, zookeeper/localhost 
is not a full Kerberos principal, but the branch-2 logic still attempts to apply 
the auth_to_local mapping, finds no match, and causes the test case to fail.  The 
test case exposes an implementation issue in Hadoop's approach to parsing 
Kerberos principals.

According to [~daryn]'s comment in 
[HADOOP-16214|https://issues.apache.org/jira/browse/HADOOP-16214?focusedCommentId=16813851=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16813851]
 stated:

{quote}That's incorrect. It supports interop between secure clients and 
insecure servers. Insecure servers treats principals as principals, else as the 
short name used by insecure clients.{quote}

If the above statement needs to remain true, we need to refine the KerberosName 
parsing strategy and formalize 
(zookeeper/localh...@example.com).getShortName() == 
(zookeeper/localhost).getShortName().

One such implementation is offered in HADOOP-16214 patch 013, but it needs some 
work to match the branch-2 implementation.  HADOOP-16214 is not committed, 
therefore take my advice with caution.
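For reference, the failing path boils down to a call like the minimal sketch 
below. The rule string here is illustrative, not the one configured by 
TestSecureLogins; on branch-2 a name without a realm is still pushed through the 
auth_to_local rules, and getShortName() throws NoMatchingRule when no rule 
matches.

import org.apache.hadoop.security.authentication.util.KerberosName;

public class ShortNameSketch {
  public static void main(String[] args) throws Exception {
    // Illustrative rule set only; the real test configures its own rules.
    KerberosName.setRules("DEFAULT");
    KerberosName name = new KerberosName("zookeeper/localhost");
    // On branch-2 this call applies the auth_to_local rules even though the
    // name carries no realm, and throws NoMatchingRule when none match.
    System.out.println(name.getShortName());
  }
}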

> TestSecureLogins#testValidKerberosName fails on branch-2
> 
>
> Key: HADOOP-16361
> URL: https://issues.apache.org/jira/browse/HADOOP-16361
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.10.0, 2.9.2, 2.8.5
>Reporter: Jim Brennan
>Priority: Major
>
> This test is failing in branch-2.
> {noformat}
> [ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 26.917 s <<< FAILURE! - in org.apache.hadoop.registry.secure.TestSecureLogins
> [ERROR] 
> testValidKerberosName(org.apache.hadoop.registry.secure.TestSecureLogins)  
> Time elapsed: 0.007 s  <<< ERROR!
> org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
> No rules applied to zookeeper/localhost
>   at 
> org.apache.hadoop.security.authentication.util.KerberosName.getShortName(KerberosName.java:401)
>   at 
> org.apache.hadoop.registry.secure.TestSecureLogins.testValidKerberosName(TestSecureLogins.java:182)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16365) Upgrade jackson-databind to 2.9.9

2019-06-11 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta updated HADOOP-16365:

Attachment: HADOOP-16365.001.patch

> Upgrade jackson-databind to 2.9.9
> -
>
> Key: HADOOP-16365
> URL: https://issues.apache.org/jira/browse/HADOOP-16365
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Reporter: Shweta
>Assignee: Shweta
>Priority: Major
> Attachments: HADOOP-16365.001.patch
>
>
> Add 2.9.9 version of Jackson-databind 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16365) Upgrade jackson-databind to 2.9.9

2019-06-11 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta updated HADOOP-16365:

Summary: Upgrade jackson-databind to 2.9.9  (was: Upgrade Jackson-databind 
to 2.9.9)

> Upgrade jackson-databind to 2.9.9
> -
>
> Key: HADOOP-16365
> URL: https://issues.apache.org/jira/browse/HADOOP-16365
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Reporter: Shweta
>Assignee: Shweta
>Priority: Major
>
> Add 2.9.9 version of Jackson-databind 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16365) Upgrade Jackson-databind to 2.9.9

2019-06-11 Thread Shweta (JIRA)
Shweta created HADOOP-16365:
---

 Summary: Upgrade Jackson-databind to 2.9.9
 Key: HADOOP-16365
 URL: https://issues.apache.org/jira/browse/HADOOP-16365
 Project: Hadoop Common
  Issue Type: Task
  Components: build
Reporter: Shweta
Assignee: Shweta


Add 2.9.9 version of Jackson-databind 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16354) Enable AuthFilter as default for WebHdfs

2019-06-11 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16861598#comment-16861598
 ] 

Hudson commented on HADOOP-16354:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16728 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16728/])
HADOOP-16354.  Enable AuthFilter as default for WebHDFS.(eyang: 
rev 4ea6c2f457496461afc63f38ef4cef3ab0efce49)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/AuthFilterInitializer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/AuthFilter.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/authentication/server/TestProxyUserAuthenticationFilter.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authentication/server/ProxyUserAuthenticationFilter.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestAuthFilter.java


> Enable AuthFilter as default for WebHdfs
> 
>
> Key: HADOOP-16354
> URL: https://issues.apache.org/jira/browse/HADOOP-16354
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16354-001.patch, HADOOP-16354-002.patch, 
> HADOOP-16354-003.patch, HADOOP-16354-004.patch, HADOOP-16354-005.patch
>
>
> HADOOP-16314 provides a generic option to configure 
> ProxyUserAuthenticationFilterInitializer (Kerberos + doAs support) for all 
> the services. If this is not configured, AuthenticationFIlter is used for 
> NameNode UI and WebHdfs. Will enable AuthFilter as default for WebHdfs so 
> that it is backward compatible.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #947: HDDS-1638. Implement Key Write Requests to use Cache and DoubleBuffer

2019-06-11 Thread GitBox
hadoop-yetus commented on issue #947: HDDS-1638. Implement Key Write Requests 
to use Cache and DoubleBuffer
URL: https://github.com/apache/hadoop/pull/947#issuecomment-501053778
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 31 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 2 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 16 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 64 | Maven dependency ordering for branch |
   | +1 | mvninstall | 507 | trunk passed |
   | +1 | compile | 300 | trunk passed |
   | +1 | checkstyle | 90 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 890 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 149 | trunk passed |
   | 0 | spotbugs | 344 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 529 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 30 | Maven dependency ordering for patch |
   | +1 | mvninstall | 475 | the patch passed |
   | +1 | compile | 283 | the patch passed |
   | +1 | cc | 283 | the patch passed |
   | -1 | javac | 194 | hadoop-ozone generated 4 new + 3 unchanged - 0 fixed = 
7 total (was 3) |
   | -0 | checkstyle | 44 | hadoop-ozone: The patch generated 6 new + 0 
unchanged - 0 fixed = 6 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 640 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 183 | the patch passed |
   | +1 | findbugs | 543 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 165 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1060 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 48 | The patch does not generate ASF License warnings. |
   | | | 6269 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.common.impl.TestHddsDispatcher 
|
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-947/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/947 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc xml |
   | uname | Linux a3f1bf299ce2 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 5740eea |
   | Default Java | 1.8.0_212 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-947/1/artifact/out/diff-compile-javac-hadoop-ozone.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-947/1/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-947/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-947/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-947/1/testReport/ |
   | Max. process+thread count | 4934 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-947/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #843: HADOOP-15183 S3Guard store becomes inconsistent after partial failure of rename

2019-06-11 Thread GitBox
hadoop-yetus commented on issue #843: HADOOP-15183 S3Guard store becomes 
inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/843#issuecomment-501053136
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 111 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 2 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 32 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 21 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1140 | trunk passed |
   | +1 | compile | 970 | trunk passed |
   | +1 | checkstyle | 140 | trunk passed |
   | +1 | mvnsite | 122 | trunk passed |
   | +1 | shadedclient | 1045 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 105 | trunk passed |
   | 0 | spotbugs | 76 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 221 | trunk passed |
   | -0 | patch | 116 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for patch |
   | +1 | mvninstall | 104 | the patch passed |
   | +1 | compile | 1207 | the patch passed |
   | +1 | javac | 1207 | the patch passed |
   | -0 | checkstyle | 161 | root: The patch generated 31 new + 100 unchanged - 
2 fixed = 131 total (was 102) |
   | +1 | mvnsite | 148 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 803 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 39 | hadoop-tools_hadoop-aws generated 1 new + 1 unchanged 
- 0 fixed = 2 total (was 1) |
   | +1 | findbugs | 235 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 584 | hadoop-common in the patch passed. |
   | +1 | unit | 293 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 44 | The patch does not generate ASF License warnings. |
   | | | 7659 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.5 Server=18.09.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-843/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/843 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux e1e471b5943e 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 5740eea |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-843/8/artifact/out/diff-checkstyle-root.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-843/8/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-843/8/testReport/ |
   | Max. process+thread count | 1348 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-843/8/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16356) Distcp with webhdfs is not working with ProxyUserAuthenticationFilter or AuthenticationFilter

2019-06-11 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang resolved HADOOP-16356.

Resolution: Duplicate

> Distcp with webhdfs is not working with ProxyUserAuthenticationFilter or 
> AuthenticationFilter
> -
>
> Key: HADOOP-16356
> URL: https://issues.apache.org/jira/browse/HADOOP-16356
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Prabhu Joseph
>Priority: Major
>
> When distcp is running with webhdfs://, there is no delegation token issued 
> to mapreduce task because mapreduce task does not have kerberos tgt ticket.
> This stack trace was thrown when mapreduce task contacts webhdfs:
> {code}
> Error: org.apache.hadoop.security.AccessControlException: Authentication 
> required
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:492)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(WebHdfsFileSystem.java:136)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.connect(WebHdfsFileSystem.java:760)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:835)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:663)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:701)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1891)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:697)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:1095)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:1106)
>   at org.apache.hadoop.tools.mapred.CopyMapper.setup(CopyMapper.java:124)
>   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:178)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1891)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:172)
> {code}
> There are two proposals:
> 1. Have an API to issue delegation token to pass along to webhdfs to maintain 
> backward compatibility.
> 2. Have mapreduce task login to kerberos then perform webhdfs fetching.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16356) Distcp with webhdfs is not working with ProxyUserAuthenticationFilter or AuthenticationFilter

2019-06-11 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16861579#comment-16861579
 ] 

Eric Yang commented on HADOOP-16356:


[~jojochuang] We have opted in to the first proposal to maintain backward 
compatibility for obtaining delegation tokens, and to support issuing delegation 
tokens through impersonation in HADOOP-16354.  Closing this as a duplicate.
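For illustration only, the second proposal (not the one chosen here) would amount 
to something like the sketch below. The principal, keytab path, and URI are 
placeholders, not values taken from this issue.

import java.net.URI;
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class WebHdfsKerberosLoginSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Placeholder principal and keytab path, for illustration only.
    UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(
        "distcp/host@EXAMPLE.COM", "/path/to/distcp.keytab");
    // Run the webhdfs call as the logged-in user instead of relying on a
    // delegation token already being present in the task credentials.
    FileStatus status = ugi.doAs((PrivilegedExceptionAction<FileStatus>) () ->
        FileSystem.get(URI.create("webhdfs://namenode:9870"), conf)
            .getFileStatus(new Path("/")));
    System.out.println(status);
  }
}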

> Distcp with webhdfs is not working with ProxyUserAuthenticationFilter or 
> AuthenticationFilter
> -
>
> Key: HADOOP-16356
> URL: https://issues.apache.org/jira/browse/HADOOP-16356
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Prabhu Joseph
>Priority: Major
>
> When distcp runs with webhdfs://, no delegation token is issued to the 
> MapReduce task because the task does not have a Kerberos TGT.
> This stack trace was thrown when the MapReduce task contacted WebHDFS:
> {code}
> Error: org.apache.hadoop.security.AccessControlException: Authentication 
> required
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:492)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(WebHdfsFileSystem.java:136)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.connect(WebHdfsFileSystem.java:760)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:835)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:663)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:701)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1891)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:697)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:1095)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:1106)
>   at org.apache.hadoop.tools.mapred.CopyMapper.setup(CopyMapper.java:124)
>   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:178)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1891)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:172)
> {code}
> There are two proposals:
> 1. Have an API to issue a delegation token to pass along to WebHDFS, to 
> maintain backward compatibility.
> 2. Have the MapReduce task log in to Kerberos and then perform the WebHDFS 
> fetching.
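For contrast, proposal 2 would amount to something like the following inside the task, assuming a keytab could somehow be made available to every task (which is its main drawback); the principal and keytab path are placeholders, not from the patch.

{code}
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class TaskKerberosLoginSketch {
  public static FileStatus statViaWebHdfs(Configuration conf) throws Exception {
    // Log the task in from a keytab instead of relying on a delegation token.
    UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(
        "user@EXAMPLE.COM", "/path/to/user.keytab");   // placeholders
    return ugi.doAs((PrivilegedExceptionAction<FileStatus>) () -> {
      FileSystem fs = new Path("webhdfs://nn.example.com:9870/").getFileSystem(conf);
      return fs.getFileStatus(new Path("/"));
    });
  }
}
{code}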



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #948: HDDS-1649. On installSnapshot notification from OM leader, download checkpoint and reload OM state

2019-06-11 Thread GitBox
hadoop-yetus commented on issue #948: HDDS-1649. On installSnapshot 
notification from OM leader, download checkpoint and reload OM state
URL: https://github.com/apache/hadoop/pull/948#issuecomment-501051549
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 43 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 38 | Maven dependency ordering for branch |
   | +1 | mvninstall | 533 | trunk passed |
   | +1 | compile | 285 | trunk passed |
   | +1 | checkstyle | 80 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1019 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 206 | trunk passed |
   | 0 | spotbugs | 355 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 595 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 31 | Maven dependency ordering for patch |
   | -1 | mvninstall | 135 | hadoop-ozone in the patch failed. |
   | -1 | compile | 53 | hadoop-ozone in the patch failed. |
   | -1 | javac | 53 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 38 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 745 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 161 | the patch passed |
   | -1 | findbugs | 109 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 170 | hadoop-hdds in the patch failed. |
   | -1 | unit | 107 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 37 | The patch does not generate ASF License warnings. |
   | | | 5014 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.common.impl.TestHddsDispatcher 
|
   |   | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-948/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/948 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 9e99e3e21d68 4.4.0-141-generic #167~14.04.1-Ubuntu SMP Mon 
Dec 10 13:20:24 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 5740eea |
   | Default Java | 1.8.0_212 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-948/1/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-948/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-948/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-948/1/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-948/1/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-948/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-948/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-948/1/testReport/ |
   | Max. process+thread count | 361 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-948/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: 

[jira] [Updated] (HADOOP-16354) Enable AuthFilter as default for WebHdfs

2019-06-11 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HADOOP-16354:
---
   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Thank you [~Prabhu Joseph] for the patch.

I just committed this to trunk.

> Enable AuthFilter as default for WebHdfs
> 
>
> Key: HADOOP-16354
> URL: https://issues.apache.org/jira/browse/HADOOP-16354
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16354-001.patch, HADOOP-16354-002.patch, 
> HADOOP-16354-003.patch, HADOOP-16354-004.patch, HADOOP-16354-005.patch
>
>
> HADOOP-16314 provides a generic option to configure 
> ProxyUserAuthenticationFilterInitializer (Kerberos + doAs support) for all 
> the services. If this is not configured, AuthenticationFilter is used for 
> the NameNode UI and WebHdfs. This will enable AuthFilter as the default for 
> WebHdfs so that it stays backward compatible.
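As a hedged sketch of what opting in to the generic filter might look like programmatically — the initializer class name below is an assumption based on the description above, not taken from the patch:

{code}
import org.apache.hadoop.conf.Configuration;

public class FilterConfigSketch {
  public static Configuration withProxyUserFilter() {
    Configuration conf = new Configuration();
    // hadoop.http.filter.initializers selects the servlet filter initializers
    // for the Hadoop HTTP endpoints; the class below is assumed from HADOOP-16314.
    conf.set("hadoop.http.filter.initializers",
        "org.apache.hadoop.security.authentication.server.ProxyUserAuthenticationFilterInitializer");
    return conf;
  }
}
{code}

When this key is left unset, WebHdfs keeps its AuthFilter behaviour, which is the backward-compatible default this issue enables.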



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #884: HDDS-1620. Implement Volume Write Requests to use Cache and DoubleBuffer.

2019-06-11 Thread GitBox
bharatviswa504 commented on a change in pull request #884: HDDS-1620. Implement 
Volume Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/884#discussion_r292687190
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeDeleteRequest.java
 ##
 @@ -0,0 +1,195 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.volume;
+
+import java.io.IOException;
+
+import com.google.common.base.Optional;
+import com.google.common.base.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.ozone.audit.AuditLogger;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.response.volume.OMVolumeCreateResponse;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.ozone.om.request.OMClientRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.volume.OMVolumeDeleteResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.DeleteVolumeRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.DeleteVolumeResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+
+/**
+ * Handles volume delete request.
+ */
+public class OMVolumeDeleteRequest extends OMClientRequest {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMVolumeDeleteRequest.class);
+
+  public OMVolumeDeleteRequest(OMRequest omRequest) {
+super(omRequest);
+  }
+
+  @Override
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+  long transactionLogIndex) {
+
+DeleteVolumeRequest deleteVolumeRequest =
+getOmRequest().getDeleteVolumeRequest();
+Preconditions.checkNotNull(deleteVolumeRequest);
+
+String volume = deleteVolumeRequest.getVolumeName();
+
+OMMetrics omMetrics = ozoneManager.getMetrics();
+omMetrics.incNumVolumeDeletes();
+
+OMResponse.Builder omResponse = OMResponse.newBuilder().setCmdType(
+OzoneManagerProtocolProtos.Type.CreateVolume).setStatus(
+OzoneManagerProtocolProtos.Status.OK).setSuccess(true);
+
+AuditLogger auditLogger = ozoneManager.getAuditLogger();
+OzoneManagerProtocolProtos.UserInfo userInfo = 
getOmRequest().getUserInfo();
+
+try {
+  // check Acl
+  if (ozoneManager.getAclsEnabled()) {
+checkAcls(ozoneManager, OzoneObj.ResourceType.VOLUME,
+OzoneObj.StoreType.OZONE, IAccessAuthorizer.ACLType.DELETE, volume,
+null, null);
+  }
+} catch (IOException ex) {
+  LOG.error("Volume deletion failed for volume:{}", volume, ex);
+  omMetrics.incNumVolumeDeleteFails();
+  auditLog(auditLogger, buildAuditMessage(OMAction.DELETE_VOLUME,
+  buildVolumeAuditMap(volume), ex, userInfo));
+  return new OMVolumeCreateResponse(null, null,
+  createErrorOMResponse(omResponse, ex));
+}
+
+OMMetadataManager omMetadataManager = ozoneManager.getMetadataManager();
+
+OmVolumeArgs omVolumeArgs = null;
+try {
+  omVolumeArgs = getVolumeInfo(omMetadataManager, volume);
+} catch (IOException ex) {
+  LOG.error("Volume deletion failed for volume:{}", volume, ex);
+  omMetrics.incNumVolumeDeleteFails();
+  auditLog(auditLogger, buildAuditMessage(OMAction.DELETE_VOLUME,
+  buildVolumeAuditMap(volume), ex, userInfo));
+  return new OMVolumeDeleteResponse(null, null, 

[jira] [Commented] (HADOOP-16354) Enable AuthFilter as default for WebHdfs

2019-06-11 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16861569#comment-16861569
 ] 

Eric Yang commented on HADOOP-16354:


+1 for patch 005.

> Enable AuthFilter as default for WebHdfs
> 
>
> Key: HADOOP-16354
> URL: https://issues.apache.org/jira/browse/HADOOP-16354
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16354-001.patch, HADOOP-16354-002.patch, 
> HADOOP-16354-003.patch, HADOOP-16354-004.patch, HADOOP-16354-005.patch
>
>
> HADOOP-16314 provides a generic option to configure 
> ProxyUserAuthenticationFilterInitializer (Kerberos + doAs support) for all 
> the services. If this is not configured, AuthenticationFilter is used for 
> the NameNode UI and WebHdfs. This will enable AuthFilter as the default for 
> WebHdfs so that it stays backward compatible.
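The "doAs support" mentioned above boils down to proxy-user impersonation on the client side; a minimal, hedged sketch of what a superuser-side caller could do (user names, host and paths are placeholders):

{code}
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class DoAsSketch {
  public static FileStatus statAsProxy(Configuration conf) throws Exception {
    // The logged-in superuser impersonates "alice"; the server-side filter is
    // what validates the proxy-user (hadoop.proxyuser.*) configuration.
    UserGroupInformation proxy = UserGroupInformation.createProxyUser(
        "alice", UserGroupInformation.getLoginUser());
    return proxy.doAs((PrivilegedExceptionAction<FileStatus>) () -> {
      FileSystem fs = new Path("webhdfs://nn.example.com:9870/").getFileSystem(conf);
      return fs.getFileStatus(new Path("/user/alice"));
    });
  }
}
{code}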



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #884: HDDS-1620. Implement Volume Write Requests to use Cache and DoubleBuffer.

2019-06-11 Thread GitBox
bharatviswa504 commented on a change in pull request #884: HDDS-1620. Implement 
Volume Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/884#discussion_r292681344
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeSetOwnerRequest.java
 ##
 @@ -0,0 +1,188 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.volume;
+
+import java.io.IOException;
+import java.util.Map;
+
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import com.google.common.base.Optional;
+import com.google.common.base.Preconditions;
+
+import org.apache.hadoop.ozone.audit.AuditLogger;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.request.OMClientRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.volume.OMVolumeCreateResponse;
+import org.apache.hadoop.ozone.om.response.volume.OMVolumeSetOwnerResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.SetVolumePropertyRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.SetVolumePropertyResponse;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+/**
+ * Handle set owner request for volume.
+ */
+public class OMVolumeSetOwnerRequest extends OMClientRequest {
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMVolumeSetOwnerRequest.class);
+
+  public OMVolumeSetOwnerRequest(OMRequest omRequest) {
+super(omRequest);
+  }
+
+  @Override
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+  long transactionLogIndex) {
+
+SetVolumePropertyRequest setVolumePropertyRequest =
+getOmRequest().getSetVolumePropertyRequest();
+
+Preconditions.checkNotNull(setVolumePropertyRequest);
+
+OMResponse.Builder omResponse = OMResponse.newBuilder().setCmdType(
+OzoneManagerProtocolProtos.Type.SetVolumeProperty).setStatus(
+OzoneManagerProtocolProtos.Status.OK).setSuccess(true);
+
+// In production this will never happen, this request will be called only
+// when we have ownerName in setVolumePropertyRequest.
+if (!setVolumePropertyRequest.hasOwnerName()) {
+  omResponse.setStatus(OzoneManagerProtocolProtos.Status.INVALID_REQUEST)
+  .setSuccess(false);
+  return new OMVolumeSetOwnerResponse(null, null, null, null,
+  omResponse.build());
+}
+
+OMMetrics omMetrics = ozoneManager.getMetrics();
+omMetrics.incNumVolumeUpdates();
+String volume = setVolumePropertyRequest.getVolumeName();
+String newOwner = setVolumePropertyRequest.getOwnerName();
+
+AuditLogger auditLogger = ozoneManager.getAuditLogger();
+OzoneManagerProtocolProtos.UserInfo userInfo = 
getOmRequest().getUserInfo();
+
+Map<String, String> auditMap = buildVolumeAuditMap(volume);
+auditMap.put(OzoneConsts.OWNER, newOwner);
+try {
+  // check Acl
+  if (ozoneManager.getAclsEnabled()) {
+checkAcls(ozoneManager, OzoneObj.ResourceType.VOLUME,
+OzoneObj.StoreType.OZONE, IAccessAuthorizer.ACLType.WRITE_ACL,
+volume, null, null);
+  }
+} catch (IOException ex) {
+  LOG.error("Changing volume ownership failed for user:{} volume:{}",
+  newOwner, volume);
+  omMetrics.incNumVolumeUpdateFails();
+  auditLog(auditLogger, 

[jira] [Commented] (HADOOP-16263) Update BUILDING.txt with macOS native build instructions

2019-06-11 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16861555#comment-16861555
 ] 

Hudson commented on HADOOP-16263:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16727 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16727/])
HADOOP-16263. Update BUILDING.txt with macOS native build instructions. 
(weichiu: rev 4fecc2a95e2bd7a4f5ba0b930f1bd6be7227d1b5)
* (edit) BUILDING.txt


> Update BUILDING.txt with macOS native build instructions
> 
>
> Key: HADOOP-16263
> URL: https://issues.apache.org/jira/browse/HADOOP-16263
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Minor
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HADOOP-16263.001.patch, HADOOP-16263.002.patch, 
> HADOOP-16263.003.patch
>
>
> I recently tried to compile the Hadoop native code on a Mac and hit a few 
> catches, involving fixes for some YARN native compilation issues (YARN-8622, 
> YARN-9487).
> Also, the OpenSSL (Homebrew) header include dir needs to be specified when 
> building native code with Maven on a Mac. BUILDING.txt should be updated for 
> this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #884: HDDS-1620. Implement Volume Write Requests to use Cache and DoubleBuffer.

2019-06-11 Thread GitBox
bharatviswa504 commented on a change in pull request #884: HDDS-1620. Implement 
Volume Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/884#discussion_r292682953
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeDeleteRequest.java
 ##
 @@ -0,0 +1,195 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.volume;
+
+import java.io.IOException;
+
+import com.google.common.base.Optional;
+import com.google.common.base.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.ozone.audit.AuditLogger;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.response.volume.OMVolumeCreateResponse;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.ozone.om.request.OMClientRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.volume.OMVolumeDeleteResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.DeleteVolumeRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.DeleteVolumeResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+
+/**
+ * Handles volume delete request.
+ */
+public class OMVolumeDeleteRequest extends OMClientRequest {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMVolumeDeleteRequest.class);
+
+  public OMVolumeDeleteRequest(OMRequest omRequest) {
+super(omRequest);
+  }
+
+  @Override
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+  long transactionLogIndex) {
+
+DeleteVolumeRequest deleteVolumeRequest =
+getOmRequest().getDeleteVolumeRequest();
+Preconditions.checkNotNull(deleteVolumeRequest);
+
+String volume = deleteVolumeRequest.getVolumeName();
+
+OMMetrics omMetrics = ozoneManager.getMetrics();
+omMetrics.incNumVolumeDeletes();
+
+OMResponse.Builder omResponse = OMResponse.newBuilder().setCmdType(
+OzoneManagerProtocolProtos.Type.CreateVolume).setStatus(
+OzoneManagerProtocolProtos.Status.OK).setSuccess(true);
+
+AuditLogger auditLogger = ozoneManager.getAuditLogger();
+OzoneManagerProtocolProtos.UserInfo userInfo = 
getOmRequest().getUserInfo();
+
+try {
+  // check Acl
+  if (ozoneManager.getAclsEnabled()) {
+checkAcls(ozoneManager, OzoneObj.ResourceType.VOLUME,
+OzoneObj.StoreType.OZONE, IAccessAuthorizer.ACLType.DELETE, volume,
+null, null);
+  }
+} catch (IOException ex) {
+  LOG.error("Volume deletion failed for volume:{}", volume, ex);
+  omMetrics.incNumVolumeDeleteFails();
+  auditLog(auditLogger, buildAuditMessage(OMAction.DELETE_VOLUME,
+  buildVolumeAuditMap(volume), ex, userInfo));
+  return new OMVolumeCreateResponse(null, null,
+  createErrorOMResponse(omResponse, ex));
+}
+
+OMMetadataManager omMetadataManager = ozoneManager.getMetadataManager();
+
+OmVolumeArgs omVolumeArgs = null;
+try {
+  omVolumeArgs = getVolumeInfo(omMetadataManager, volume);
 
 Review comment:
   We cannot do that, because we cannot acquire the user lock while holding 
the volume lock.
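   To make the ordering constraint concrete, a generic, hedged sketch (this is not OzoneManagerLock's real API): locks are taken in one fixed order, user before volume, so code that already holds the volume lock cannot then request the user lock without risking deadlock.

{code}
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LockOrderSketch {
  private final ReentrantReadWriteLock userLock = new ReentrantReadWriteLock();
  private final ReentrantReadWriteLock volumeLock = new ReentrantReadWriteLock();

  // Correct order: user lock first, then volume lock.
  public void setOwner(Runnable updateVolumeOwner) {
    userLock.writeLock().lock();
    try {
      volumeLock.writeLock().lock();
      try {
        updateVolumeOwner.run();   // mutate the volume -> owner mapping
      } finally {
        volumeLock.writeLock().unlock();
      }
    } finally {
      userLock.writeLock().unlock();
    }
  }
}
{code}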


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific 

[GitHub] [hadoop] steveloughran commented on issue #843: HADOOP-15183 S3Guard store becomes inconsistent after partial failure of rename

2019-06-11 Thread GitBox
steveloughran commented on issue #843: HADOOP-15183 S3Guard store becomes 
inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/843#issuecomment-501043896
 
 
   I've just pushed up a new version which addresses the checkstyle complaints 
and the various nits reported. Tested against S3 Ireland. All good except for 
the new test where I'm trying to create an inconsistency in the table to cause 
problems in prune. I think I'm going to put that to one side right now and 
worry about it in some tests for fsck in a subsequent patch.
   
   After this iteration I'm going to have to move to a new PR; Yetus is unhappy 
again.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


