[GitHub] [hadoop-ozone] adoroszlai commented on a change in pull request #1425: HDDS-2981 Add unit tests for Proto [de]serialization

2020-10-06 Thread GitBox


adoroszlai commented on a change in pull request #1425:
URL: https://github.com/apache/hadoop-ozone/pull/1425#discussion_r499007546



##
File path: 
hadoop-ozone/common/src/test/java/org/apache/hadoop/ozone/om/helpers/TestInstanceHelper.java
##
@@ -0,0 +1,67 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.helpers;
+
+import com.google.protobuf.ByteString;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.ozone.OzoneAcl;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+
+
+/**
+ * Test TestInstanceHelper.
+ *
+ * Utility methods to create test instances of protobuf related classes
+ */
+public final class TestInstanceHelper {
+
+  private TestInstanceHelper() {
+    super();
+  }
+
+  public static OzoneManagerProtocolProtos.OzoneAclInfo buildTestOzoneAclInfo(
+      String aclString) {
+    OzoneAcl oacl = OzoneAcl.parseAcl(aclString);
+    ByteString rights = ByteString.copyFrom(oacl.getAclBitSet().toByteArray());
+    return OzoneManagerProtocolProtos.OzoneAclInfo.newBuilder()
+        .setType(OzoneManagerProtocolProtos.OzoneAclInfo.OzoneAclType.USER)
+        .setName(oacl.getName())
+        .setRights(rights)
+        .setAclScope(OzoneManagerProtocolProtos.
+            OzoneAclInfo.OzoneAclScope.ACCESS)
+        .build();
+  }
+
+  public static HddsProtos.KeyValue getDefaultTestMetadata(
+      String key, String value) {
+    return HddsProtos.KeyValue.newBuilder()
+        .setKey(key)
+        .setValue(value)
+        .build();
+  }
+
+  public static OzoneManagerProtocolProtos.PrefixInfo getDefaultTestPrefixInfo(

Review comment:
   I don't think there is much "default" in these `getDefaultTest...` 
methods, as (most) data still has to be provided.  They are just factory 
methods wrapping the builders, trading flexibility and readability for 
slightly shorter code.  (The builder is more flexible since it can accept 
e.g. multiple ACLs, and more readable because each argument is passed to a 
named method instead of a bunch of arguments to a single constructor.)
   
   Instead of such generic factory methods, I suggest adding:
   
* ones that create objects with specific properties relevant for the tests
* ones that fill irrelevant properties with random data (e.g. for the 
metadata key-value pair)
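A minimal sketch of that second suggestion, using plain JDK types instead of the protobuf builders; all names here (`RandomTestData`, `randomMetadata`) are hypothetical illustrations, not part of the patch:

```java
import java.util.AbstractMap;
import java.util.Map;
import java.util.UUID;

// Hypothetical sketch: fill properties that are irrelevant to a given test
// with random data, so the test cannot silently start depending on them.
public final class RandomTestData {

  private RandomTestData() {
  }

  // Random key-value pair for metadata the test does not assert on.
  public static Map.Entry<String, String> randomMetadata() {
    return new AbstractMap.SimpleImmutableEntry<>(
        "key-" + UUID.randomUUID(), "value-" + UUID.randomUUID());
  }
}
```

A test that only cares about, say, the ACL bits could then call `randomMetadata()` for the key-value pair it must populate but does not check.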


[GitHub] [hadoop-ozone] vivekratnavel commented on pull request #1481: HDDS-4316. Upgrade to angular 1.8.0 due to CVE-2020-7676

2020-10-06 Thread GitBox


vivekratnavel commented on pull request #1481:
URL: https://github.com/apache/hadoop-ozone/pull/1481#issuecomment-704705403


   @dineshchitlangia Good catch! Thanks for the review!



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] bshashikant edited a comment on pull request #1371: HDDS-2922. Balance ratis leader distribution in datanodes

2020-10-06 Thread GitBox


bshashikant edited a comment on pull request #1371:
URL: https://github.com/apache/hadoop-ozone/pull/1371#issuecomment-704702172


   > @bshashikant Thanks for the suggestions. Actually, RandomLeaderChoosePolicy 
does not choose a datanode; it returns null in 
[chooseLeader](https://github.com/apache/hadoop-ozone/blob/f573b4a8b45f4463f0bcb95971a45789bda91d5c/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/leader/choose/algorithms/RandomLeaderChoosePolicy.java#L40),
 so all the datanodes are assigned the [same 
priority](https://github.com/apache/hadoop-ozone/blob/f573b4a8b45f4463f0bcb95971a45789bda91d5c/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/CreatePipelineCommand.java#L60),
 which matches the current behavior. The name RandomLeaderChoosePolicy seems 
confusing; sorry for the misleading name. Do you have a better one?
   
   I guess this can be named "DefaultLeaderChoosePolicy", and it should be made 
the default until we measure the performance of the minimum-leader-election-count 
policy and see the results. What do you think?
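A rough sketch of the rename under discussion, with simplified stand-in types rather than the real Ozone interfaces (the linked code suggests the policy's `chooseLeader` may return null to signal "no preference"):

```java
// Simplified stand-in for the Ozone datanode type; only the shape matters here.
interface DatanodeDetailsStub {
}

interface LeaderChoosePolicy {
  // Returns the preferred leader, or null for "no preference".
  DatanodeDetailsStub chooseLeader();
}

// Sketch of the proposed DefaultLeaderChoosePolicy: returning null means no
// datanode is singled out, so every replica keeps the same election priority,
// preserving the current behavior.
public class DefaultLeaderChoosePolicy implements LeaderChoosePolicy {
  @Override
  public DatanodeDetailsStub chooseLeader() {
    return null;
  }
}
```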
   
   






[GitHub] [hadoop-ozone] bshashikant commented on pull request #1371: HDDS-2922. Balance ratis leader distribution in datanodes

2020-10-06 Thread GitBox


bshashikant commented on pull request #1371:
URL: https://github.com/apache/hadoop-ozone/pull/1371#issuecomment-704702172


   > @bshashikant Thanks for the suggestions. Actually, RandomLeaderChoosePolicy 
does not choose a datanode; it returns null in 
[chooseLeader](https://github.com/apache/hadoop-ozone/blob/f573b4a8b45f4463f0bcb95971a45789bda91d5c/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/leader/choose/algorithms/RandomLeaderChoosePolicy.java#L40),
 so all the datanodes are assigned the [same 
priority](https://github.com/apache/hadoop-ozone/blob/f573b4a8b45f4463f0bcb95971a45789bda91d5c/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/CreatePipelineCommand.java#L60),
 which matches the current behavior. The name RandomLeaderChoosePolicy seems 
confusing; sorry for the misleading name. Do you have a better one?
   
   I guess this can be named "DefaultLeaderChoosePolicy", and it should be made 
the default until we measure the performance of the minimum-leader-election-count 
policy and see the results.
   
   






[GitHub] [hadoop-ozone] adoroszlai commented on a change in pull request #1478: Fix inconsistency recon config keys starting with recon and not ozone

2020-10-06 Thread GitBox


adoroszlai commented on a change in pull request #1478:
URL: https://github.com/apache/hadoop-ozone/pull/1478#discussion_r500733595



##
File path: 
hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/spi/impl/OzoneManagerServiceProviderImpl.java
##
@@ -205,12 +205,12 @@ public void start() {
     }
     reconTaskController.start();
     long initialDelay = configuration.getTimeDuration(
-        RECON_OM_SNAPSHOT_TASK_INITIAL_DELAY,
-        RECON_OM_SNAPSHOT_TASK_INITIAL_DELAY_DEFAULT,
+        OZONE_RECON_OM_SNAPSHOT_TASK_INITIAL_DELAY,
+        OZONE_RECON_OM_SNAPSHOT_TASK_INITIAL_DELAY_DEFAULT,

Review comment:
   ```suggestion
        OZONE_RECON_OM_SNAPSHOT_TASK_INITIAL_DELAY,
        configuration.get(
            ReconServerConfigKeys.RECON_OM_SNAPSHOT_TASK_INITIAL_DELAY,
            OZONE_RECON_OM_SNAPSHOT_TASK_INITIAL_DELAY_DEFAULT),
   ```
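The intent of the suggestion is a backward-compatible lookup: read the new `ozone.recon.`-prefixed key, using the value of the deprecated `recon.`-prefixed key (if set) as the default. The same precedence can be sketched with plain `java.util.Properties`; the Hadoop `Configuration` call chain in the suggestion follows an analogous nested-lookup pattern:

```java
import java.util.Properties;

// Sketch of the fallback lookup: prefer the new "ozone.recon." key, fall back
// to the deprecated "recon." key, and only then to the hard-coded default.
public final class ConfigFallback {

  private ConfigFallback() {
  }

  public static String getWithFallback(Properties conf, String newKey,
      String deprecatedKey, String defaultValue) {
    return conf.getProperty(newKey,
        conf.getProperty(deprecatedKey, defaultValue));
  }
}
```

With this ordering, existing deployments that still set only the old key keep working, while the new key wins whenever both are present.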

##
File path: 
hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/ReconServerConfigKeys.java
##
@@ -60,31 +60,31 @@
 
   public static final String RECON_STORAGE_DIR = "recon";
 
-  public static final String RECON_OM_SOCKET_TIMEOUT =
-      "recon.om.socket.timeout";
-  public static final String RECON_OM_SOCKET_TIMEOUT_DEFAULT = "5s";
+  public static final String OZONE_RECON_OM_SOCKET_TIMEOUT =
+      "ozone.recon.om.socket.timeout";
+  public static final String OZONE_RECON_OM_SOCKET_TIMEOUT_DEFAULT = "5s";
 
-  public static final String RECON_OM_CONNECTION_TIMEOUT =
-      "recon.om.connection.timeout";
-  public static final String RECON_OM_CONNECTION_TIMEOUT_DEFAULT = "5s";
+  public static final String OZONE_RECON_OM_CONNECTION_TIMEOUT =
+      "ozone.recon.om.connection.timeout";
+  public static final String OZONE_RECON_OM_CONNECTION_TIMEOUT_DEFAULT = "5s";
 
-  public static final String RECON_OM_CONNECTION_REQUEST_TIMEOUT =
-      "recon.om.connection.request.timeout";
+  public static final String OZONE_RECON_OM_CONNECTION_REQUEST_TIMEOUT =
+      "ozone.recon.om.connection.request.timeout";
 
-  public static final String RECON_OM_CONNECTION_REQUEST_TIMEOUT_DEFAULT = "5s";
+  public static final String OZONE_RECON_OM_CONNECTION_REQUEST_TIMEOUT_DEFAULT = "5s";
 
-  public static final String RECON_OM_SNAPSHOT_TASK_INITIAL_DELAY =
-      "recon.om.snapshot.task.initial.delay";
+  public static final String OZONE_RECON_OM_SNAPSHOT_TASK_INITIAL_DELAY =
+      "ozone.recon.om.snapshot.task.initial.delay";

Review comment:
   ```suggestion
  public static final String OZONE_RECON_OM_SNAPSHOT_TASK_INITIAL_DELAY =
      "ozone.recon.om.snapshot.task.initial.delay";
  @Deprecated
  public static final String RECON_OM_SNAPSHOT_TASK_INITIAL_DELAY =
      "recon.om.snapshot.task.initial.delay";
   ```








[GitHub] [hadoop-ozone] dineshchitlangia commented on a change in pull request #1481: HDDS-4316. Upgrade to angular 1.8.0 due to CVE-2020-7676

2020-10-06 Thread GitBox


dineshchitlangia commented on a change in pull request #1481:
URL: https://github.com/apache/hadoop-ozone/pull/1481#discussion_r500704979



##
File path: hadoop-ozone/pom.xml
##
@@ -294,7 +294,7 @@
 src/test/resources/ssl/*
 
src/main/compose/ozonesecure/docker-image/runner/build/apache-rat-0.12/README-CLI.txt
 
src/main/compose/ozonesecure/docker-image/runner/build/apache-rat-0.12/README-ANT.txt
-webapps/static/angular-1.7.9.min.js
+webapps/static/angular-1.8.0.min.js
 webapps/static/angular-nvd3-1.0.9.min.js
 webapps/static/angular-route-1.7.9.min.js

Review comment:
   ```suggestion
   webapps/static/angular-route-1.8.0.min.js
   ```








[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #1473: HDDS-4266: CreateFile : store parent dir entries into DirTable and file entry into separate FileTable

2020-10-06 Thread GitBox


bharatviswa504 commented on a change in pull request #1473:
URL: https://github.com/apache/hadoop-ozone/pull/1473#discussion_r500677311



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMKeyCommitResponseV1.java
##
@@ -0,0 +1,61 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.response.key;
+
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.response.CleanupTableInfo;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMResponse;
+
+import javax.annotation.Nonnull;
+
+import static org.apache.hadoop.ozone.om.OmMetadataManagerImpl.*;
+
+/**
+ * Response for CommitKey request layout version V1.
+ */
+@CleanupTableInfo(cleanupTables = {OPEN_FILE_TABLE, FILE_TABLE})
+public class OMKeyCommitResponseV1 extends OMKeyCommitResponse {
+
+  private OmKeyInfo omKeyInfo;
+  private String ozoneKeyName;
+  private String openKeyName;
+
+  public OMKeyCommitResponseV1(@Nonnull OMResponse omResponse,
+      @Nonnull OmKeyInfo omKeyInfo,
+      String ozoneKeyName, String openKeyName,
+      @Nonnull OmVolumeArgs omVolumeArgs,
+      @Nonnull OmBucketInfo omBucketInfo) {
+    super(omResponse, omKeyInfo, ozoneKeyName, openKeyName, omVolumeArgs,
+        omBucketInfo);
+    this.omKeyInfo = omKeyInfo;

Review comment:
   These 3 fields are set but never used anywhere.








[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #1473: HDDS-4266: CreateFile : store parent dir entries into DirTable and file entry into separate FileTable

2020-10-06 Thread GitBox


bharatviswa504 commented on a change in pull request #1473:
URL: https://github.com/apache/hadoop-ozone/pull/1473#discussion_r500676583



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyCommitRequestV1.java
##
@@ -0,0 +1,260 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.key;
+
+import com.google.common.base.Optional;
+import org.apache.hadoop.hdds.utils.db.cache.CacheKey;
+import org.apache.hadoop.hdds.utils.db.cache.CacheValue;
+import org.apache.hadoop.ozone.audit.AuditLogger;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.*;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
+import org.apache.hadoop.ozone.om.request.util.OmResponseUtil;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.key.OMKeyCommitResponse;
+import org.apache.hadoop.ozone.om.response.key.OMKeyCommitResponseV1;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.*;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+
+import static org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes.KEY_NOT_FOUND;
+import static org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+
+/**
+ * Handles CommitKey request.
+ */
+public class OMKeyCommitRequestV1 extends OMKeyCommitRequest {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(OMKeyCommitRequestV1.class);
+
+  public OMKeyCommitRequestV1(OMRequest omRequest) {
+    super(omRequest);
+  }
+
+  @Override
+  @SuppressWarnings("methodlength")
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+      long trxnLogIndex, OzoneManagerDoubleBufferHelper omDoubleBufferHelper) {
+
+    CommitKeyRequest commitKeyRequest = getOmRequest().getCommitKeyRequest();
+
+    KeyArgs commitKeyArgs = commitKeyRequest.getKeyArgs();
+
+    String volumeName = commitKeyArgs.getVolumeName();
+    String bucketName = commitKeyArgs.getBucketName();
+    String keyName = commitKeyArgs.getKeyName();
+
+    OMMetrics omMetrics = ozoneManager.getMetrics();
+    omMetrics.incNumKeyCommits();
+
+    AuditLogger auditLogger = ozoneManager.getAuditLogger();
+
+    Map<String, String> auditMap = buildKeyArgsAuditMap(commitKeyArgs);
+
+    OMResponse.Builder omResponse = OmResponseUtil.getOMResponseBuilder(
+        getOmRequest());
+
+    IOException exception = null;
+    OmKeyInfo omKeyInfo = null;
+    OmVolumeArgs omVolumeArgs = null;
+    OmBucketInfo omBucketInfo = null;
+    OMClientResponse omClientResponse = null;
+    boolean bucketLockAcquired = false;
+    Result result;
+
+    OMMetadataManager omMetadataManager = ozoneManager.getMetadataManager();
+
+    try {
+      commitKeyArgs = resolveBucketLink(ozoneManager, commitKeyArgs, auditMap);
+      volumeName = commitKeyArgs.getVolumeName();
+      bucketName = commitKeyArgs.getBucketName();
+
+      // check Acl
+      checkKeyAclsInOpenKeyTable(ozoneManager, volumeName, bucketName,
+          keyName, IAccessAuthorizer.ACLType.WRITE,
+          commitKeyRequest.getClientID());
+
+      String bucketKey = omMetadataManager.getBucketKey(volumeName, bucketName);
+      Iterator<Path> pathComponents = Paths.get(keyName).iterator();
+      String dbOpenFileKey = null;
+
+      List<OmKeyLocationInfo> locationInfoList = new ArrayList<>();
+      for (KeyLocation keyLocation : commitKeyArgs.getKeyLocationsList()) {
+        locationInfoList.add(OmKeyLocationInfo.getFromProtobuf(keyLocation));
+      }
+
+      bucketLockAcquired =
+          omMetadataManager.getLock().acquireLock(BUCKET_LOCK,

[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #1473: HDDS-4266: CreateFile : store parent dir entries into DirTable and file entry into separate FileTable

2020-10-06 Thread GitBox


bharatviswa504 commented on a change in pull request #1473:
URL: https://github.com/apache/hadoop-ozone/pull/1473#discussion_r500676200



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyCommitRequestV1.java
##
@@ -0,0 +1,260 @@

[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #1473: HDDS-4266: CreateFile : store parent dir entries into DirTable and file entry into separate FileTable

2020-10-06 Thread GitBox


bharatviswa504 commented on a change in pull request #1473:
URL: https://github.com/apache/hadoop-ozone/pull/1473#discussion_r500674939



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyCommitRequestV1.java
##
@@ -0,0 +1,260 @@

[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #1473: HDDS-4266: CreateFile : store parent dir entries into DirTable and file entry into separate FileTable

2020-10-06 Thread GitBox


bharatviswa504 commented on a change in pull request #1473:
URL: https://github.com/apache/hadoop-ozone/pull/1473#discussion_r500674381



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyCommitRequestV1.java
##
@@ -0,0 +1,260 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.key;
+
+import com.google.common.base.Optional;
+import org.apache.hadoop.hdds.utils.db.cache.CacheKey;
+import org.apache.hadoop.hdds.utils.db.cache.CacheValue;
+import org.apache.hadoop.ozone.audit.AuditLogger;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.*;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
+import org.apache.hadoop.ozone.om.request.util.OmResponseUtil;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.key.OMKeyCommitResponse;
+import org.apache.hadoop.ozone.om.response.key.OMKeyCommitResponseV1;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.*;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+
+import static org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes.KEY_NOT_FOUND;
+import static org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+
+/**
+ * Handles CommitKey request.
+ */
+public class OMKeyCommitRequestV1 extends OMKeyCommitRequest {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMKeyCommitRequestV1.class);
+
+  public OMKeyCommitRequestV1(OMRequest omRequest) {
+super(omRequest);
+  }
+
+  @Override
+  @SuppressWarnings("methodlength")
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+  long trxnLogIndex, OzoneManagerDoubleBufferHelper omDoubleBufferHelper) {
+
+CommitKeyRequest commitKeyRequest = getOmRequest().getCommitKeyRequest();
+
+KeyArgs commitKeyArgs = commitKeyRequest.getKeyArgs();
+
+String volumeName = commitKeyArgs.getVolumeName();
+String bucketName = commitKeyArgs.getBucketName();
+String keyName = commitKeyArgs.getKeyName();
+
+OMMetrics omMetrics = ozoneManager.getMetrics();
+omMetrics.incNumKeyCommits();
+
+AuditLogger auditLogger = ozoneManager.getAuditLogger();
+
+Map<String, String> auditMap = buildKeyArgsAuditMap(commitKeyArgs);
+
+OMResponse.Builder omResponse = OmResponseUtil.getOMResponseBuilder(
+getOmRequest());
+
+IOException exception = null;
+OmKeyInfo omKeyInfo = null;
+OmVolumeArgs omVolumeArgs = null;
+OmBucketInfo omBucketInfo = null;
+OMClientResponse omClientResponse = null;
+boolean bucketLockAcquired = false;
+Result result;
+
+OMMetadataManager omMetadataManager = ozoneManager.getMetadataManager();
+
+try {
+  commitKeyArgs = resolveBucketLink(ozoneManager, commitKeyArgs, auditMap);
+  volumeName = commitKeyArgs.getVolumeName();
+  bucketName = commitKeyArgs.getBucketName();
+
+  // check Acl
+  checkKeyAclsInOpenKeyTable(ozoneManager, volumeName, bucketName,
+  keyName, IAccessAuthorizer.ACLType.WRITE,
+  commitKeyRequest.getClientID());
+
+
+  String bucketKey = omMetadataManager.getBucketKey(volumeName, bucketName);
+  Iterator<Path> pathComponents = Paths.get(keyName).iterator();
+  String dbOpenFileKey = null;
+
+  List<OmKeyLocationInfo> locationInfoList = new ArrayList<>();
+  for (KeyLocation keyLocation : commitKeyArgs.getKeyLocationsList()) {
+locationInfoList.add(OmKeyLocationInfo.getFromProtobuf(keyLocation));
+  }
+
+  bucketLockAcquired =
+  omMetadataManager.getLock().acquireLock(BUCKET_LOCK,
+  

[GitHub] [hadoop-ozone] vivekratnavel commented on pull request #1478: Fix inconsistency recon config keys starting with recon and not ozone

2020-10-06 Thread GitBox


vivekratnavel commented on pull request #1478:
URL: https://github.com/apache/hadoop-ozone/pull/1478#issuecomment-704627172


   @frischHWC Thanks for working on this! The patch looks good to me except for the 
failing checkstyle violations. Please fix the overlong lines and I can merge it.
   
   ```
   
hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/ReconServerConfigKeys.java
74: Line is longer than 80 characters (found 86).
   
hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/spi/impl/OzoneManagerServiceProviderImpl.java
128: Line is longer than 80 characters (found 82).
   
hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/spi/impl/PrometheusServiceProviderImpl.java
68: Line is longer than 80 characters (found 84).
   
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/recon/TestReconWithOzoneManager.java
99: Line is longer than 80 characters (found 82).
   
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestOzoneConfigurationFields.java
72: Line is longer than 80 characters (found 82).
   ```



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] vivekratnavel opened a new pull request #1481: HDDS-4316. Upgrade to angular 1.8.0 due to CVE-2020-7676

2020-10-06 Thread GitBox


vivekratnavel opened a new pull request #1481:
URL: https://github.com/apache/hadoop-ozone/pull/1481


   ## What changes were proposed in this pull request?
   
   Upgrade angular from 1.7.9 -> 1.8.0
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-4316
   
   ## How was this patch tested?
   
   Tested with docker-compose and verified that all UIs work fine as expected 
after the upgrade.
   






[jira] [Updated] (HDDS-4316) Upgrade to angular 1.8.0 due to CVE-2020-7676

2020-10-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4316:
-
Labels: pull-request-available  (was: )

> Upgrade to angular 1.8.0 due to CVE-2020-7676
> -
>
> Key: HDDS-4316
> URL: https://issues.apache.org/jira/browse/HDDS-4316
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Affects Versions: 1.0.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>
> Angular versions < 1.8.0 are vulnerable to cross-site scripting
> [https://nvd.nist.gov/vuln/detail/CVE-2020-7676]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4316) Upgrade to angular 1.8.0 due to CVE-2020-7676

2020-10-06 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian updated HDDS-4316:
-
Status: Patch Available  (was: Open)

> Upgrade to angular 1.8.0 due to CVE-2020-7676
> -
>
> Key: HDDS-4316
> URL: https://issues.apache.org/jira/browse/HDDS-4316
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Affects Versions: 1.0.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>
> Angular versions < 1.8.0 are vulnerable to cross-site scripting
> [https://nvd.nist.gov/vuln/detail/CVE-2020-7676]






[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #1473: HDDS-4266: CreateFile : store parent dir entries into DirTable and file entry into separate FileTable

2020-10-06 Thread GitBox


bharatviswa504 commented on a change in pull request #1473:
URL: https://github.com/apache/hadoop-ozone/pull/1473#discussion_r500642282



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMFileCreateRequestV1.java
##
@@ -0,0 +1,289 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.file;
+
+import com.google.common.base.Optional;
+import org.apache.hadoop.hdds.utils.db.cache.CacheKey;
+import org.apache.hadoop.hdds.utils.db.cache.CacheValue;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.*;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
+import org.apache.hadoop.ozone.om.request.util.OmResponseUtil;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.file.OMFileCreateResponse;
+import org.apache.hadoop.ozone.om.response.file.OMFileCreateResponseV1;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.*;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.nio.file.Paths;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.stream.Collectors;
+
+import static org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+import static org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.*;
+
+/**
+ * Handles create file request layout version1.
+ */
+public class OMFileCreateRequestV1 extends OMFileCreateRequest {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMFileCreateRequestV1.class);
+  public OMFileCreateRequestV1(OMRequest omRequest) {
+super(omRequest);
+  }
+
+  @Override
+  @SuppressWarnings("methodlength")
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+  long trxnLogIndex, OzoneManagerDoubleBufferHelper omDoubleBufferHelper) {
+
+CreateFileRequest createFileRequest = getOmRequest().getCreateFileRequest();
+KeyArgs keyArgs = createFileRequest.getKeyArgs();
+Map<String, String> auditMap = buildKeyArgsAuditMap(keyArgs);
+
+String volumeName = keyArgs.getVolumeName();
+String bucketName = keyArgs.getBucketName();
+String keyName = keyArgs.getKeyName();
+
+// if isRecursive is true, the file will be created even if the parent
+// directories do not exist.
+boolean isRecursive = createFileRequest.getIsRecursive();
+if (LOG.isDebugEnabled()) {
+  LOG.debug("File create for : " + volumeName + "/" + bucketName + "/"
+  + keyName + ":" + isRecursive);
+}
+
+// if isOverWrite is true, the file will be overwritten.
+boolean isOverWrite = createFileRequest.getIsOverwrite();
+
+OMMetrics omMetrics = ozoneManager.getMetrics();
+omMetrics.incNumCreateFile();
+
+OMMetadataManager omMetadataManager = ozoneManager.getMetadataManager();
+
+boolean acquiredLock = false;
+
+OmVolumeArgs omVolumeArgs = null;
+OmBucketInfo omBucketInfo = null;
+final List<OmKeyLocationInfo> locations = new ArrayList<>();
+List missingParentInfos;
+int numKeysCreated = 0;
+
+OMClientResponse omClientResponse = null;
+OMResponse.Builder omResponse = OmResponseUtil.getOMResponseBuilder(
+getOmRequest());
+IOException exception = null;
+Result result = null;
+try {
+  keyArgs = resolveBucketLink(ozoneManager, keyArgs, auditMap);
+  volumeName = keyArgs.getVolumeName();
+  bucketName = keyArgs.getBucketName();
+
+  if (keyName.length() == 0) {
+// Check if this is the root of the filesystem.
+throw new OMException("Can not write to directory: " + keyName,
+OMException.ResultCodes.NOT_A_FILE);
+  }
+
+  // check Acl
+  checkKeyAcls(ozoneManager, volumeName, bucketName, keyName,
+ 

[jira] [Created] (HDDS-4316) Upgrade to angular 1.8.0 due to CVE-2020-7676

2020-10-06 Thread Vivek Ratnavel Subramanian (Jira)
Vivek Ratnavel Subramanian created HDDS-4316:


 Summary: Upgrade to angular 1.8.0 due to CVE-2020-7676
 Key: HDDS-4316
 URL: https://issues.apache.org/jira/browse/HDDS-4316
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Affects Versions: 1.0.0
Reporter: Vivek Ratnavel Subramanian
Assignee: Vivek Ratnavel Subramanian


Angular versions < 1.8.0 are vulnerable to cross-site scripting

[https://nvd.nist.gov/vuln/detail/CVE-2020-7676]






[GitHub] [hadoop-ozone] xiaoyuyao commented on pull request #1472: HDDS-4306. Ozone checkstyle rule can't be imported to IntelliJ.

2020-10-06 Thread GitBox


xiaoyuyao commented on pull request #1472:
URL: https://github.com/apache/hadoop-ozone/pull/1472#issuecomment-704608201


   Thanks @adoroszlai for the review. The suggested change LGTM; I will make the 
change in the next commit. One thing I noticed: you suggested using 
LineLengthCheck instead of the LineLength module. I think that is a typo, based on the 
document [here|https://checkstyle.sourceforge.io/config_sizes.html#LineLength]
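
   For reference, the check discussed above is declared as a `LineLength` module 
in the Checkstyle XML. A minimal hypothetical fragment (property names per the 
linked Checkstyle documentation; the 80-column limit matches this project's rule):

   ```xml
   <module name="Checker">
     <!-- LineLength is a direct child of Checker in recent Checkstyle versions. -->
     <module name="LineLength">
       <property name="max" value="80"/>
     </module>
   </module>
   ```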
   
   
 
   






[jira] [Updated] (HDDS-4315) Use Epoch to generate unique ObjectIDs

2020-10-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4315:
-
Labels: pull-request-available  (was: )

> Use Epoch to generate unique ObjectIDs
> --
>
> Key: HDDS-4315
> URL: https://issues.apache.org/jira/browse/HDDS-4315
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>
> In a non-Ratis OM, the transaction index used to generate ObjectID is reset 
> on OM restart. This can lead to duplicate ObjectIDs when the OM is restarted. 
> ObjectIDs should be unique. 
> HDDS-2939 and NFS are some of the features that depend on ObjectIDs being 
> unique.
> This Jira aims to introduce an epoch number in OM which is incremented on OM 
> restarts. The epoch is persisted on disk. This epoch will be used to set the 
> first 16 bits of the objectID to ensure that objectIDs are unique even after 
> OM restart.
> The highest epoch number is reserved for transactions coming through ratis. 
> This will take care of the scenario where OM ratis is enabled on an existing 
> cluster. 






[GitHub] [hadoop-ozone] hanishakoneru opened a new pull request #1480: HDDS-4315. Use Epoch to generate unique ObjectIDs

2020-10-06 Thread GitBox


hanishakoneru opened a new pull request #1480:
URL: https://github.com/apache/hadoop-ozone/pull/1480


   ## What changes were proposed in this pull request?
   
   In a non-Ratis OM, the transaction index used to generate ObjectID is reset 
on OM restart. This can lead to duplicate ObjectIDs when the OM is restarted. 
ObjectIDs should be unique. 
   HDDS-2939 and NFS are some of the features that depend on ObjectIDs 
being unique.
   
   This Jira aims to introduce an epoch number in OM which is incremented on OM 
restarts. The epoch is persisted on disk. This epoch will be used to set the 
first 16 bits of the objectID to ensure that objectIDs are unique even after OM 
restart.
   
   The highest epoch number is reserved for transactions coming through ratis. 
This will take care of the scenario where OM ratis is enabled on an existing 
cluster. 
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-4315
   
   ## How was this patch tested?
   
   Will add unit tests in next commit.
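
   As a rough illustrative sketch of the bit-packing described above (the class and 
method names, and the exact 16/48-bit split of a `long`, are assumptions for 
illustration, not the actual OzoneManager code):

   ```java
   /**
    * Sketch only: pack a persisted restart epoch into the high 16 bits of an
    * object ID so IDs remain unique across OM restarts, as proposed above.
    */
   public class EpochObjectId {
     private static final int EPOCH_BITS = 16;
     private static final int INDEX_BITS = Long.SIZE - EPOCH_BITS; // 48 bits
     private static final long INDEX_MASK = (1L << INDEX_BITS) - 1;

     /** Epoch occupies the high bits; the transaction index the low 48 bits. */
     public static long toObjectId(long epoch, long txIndex) {
       return (epoch << INDEX_BITS) | (txIndex & INDEX_MASK);
     }

     /** Recover the epoch from an ID, e.g. when debugging duplicates. */
     public static long epochOf(long objectId) {
       return objectId >>> INDEX_BITS;
     }

     public static void main(String[] args) {
       long id = toObjectId(3, 42);
       System.out.println(epochOf(id));     // prints 3
       System.out.println(id & INDEX_MASK); // prints 42
     }
   }
   ```

   Because the epoch only changes on restart, IDs generated within one epoch stay 
monotonic, while a restarted OM can never repeat an ID from an earlier epoch.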






[jira] [Created] (HDDS-4315) Use Epoch to generate unique ObjectIDs

2020-10-06 Thread Hanisha Koneru (Jira)
Hanisha Koneru created HDDS-4315:


 Summary: Use Epoch to generate unique ObjectIDs
 Key: HDDS-4315
 URL: https://issues.apache.org/jira/browse/HDDS-4315
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Manager
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru


In a non-Ratis OM, the transaction index used to generate ObjectID is reset on 
OM restart. This can lead to duplicate ObjectIDs when the OM is restarted. 
ObjectIDs should be unique. 
HDDS-2939 and NFS are some of the features that depend on ObjectIDs being 
unique.

This Jira aims to introduce an epoch number in OM which is incremented on OM 
restarts. The epoch is persisted on disk. This epoch will be used to set the 
first 16 bits of the objectID to ensure that objectIDs are unique even after OM 
restart.

The highest epoch number is reserved for transactions coming through ratis. 
This will take care of the scenario where OM ratis is enabled on an existing 
cluster. 






[jira] [Created] (HDDS-4314) OM Layout Version Manager init throws silent CNF error in integration tests.

2020-10-06 Thread Aravindan Vijayan (Jira)
Aravindan Vijayan created HDDS-4314:
---

 Summary: OM Layout Version Manager init throws silent CNF error in 
integration tests.
 Key: HDDS-4314
 URL: https://issues.apache.org/jira/browse/HDDS-4314
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Manager
Reporter: Aravindan Vijayan
 Fix For: 1.1.0


{code}
org.reflections.ReflectionsException: could not get type for name mockit.MockUp
at org.reflections.ReflectionUtils.forName(ReflectionUtils.java:312)
at org.reflections.Reflections.expandSuperTypes(Reflections.java:382)
at org.reflections.Reflections.(Reflections.java:140)
at org.reflections.Reflections.(Reflections.java:182)
at org.reflections.Reflections.(Reflections.java:155)
at 
org.apache.hadoop.ozone.om.upgrade.OMLayoutVersionManagerImpl.registerOzoneManagerRequests(OMLayoutVersionManagerImpl.java:122)
at 
org.apache.hadoop.ozone.om.upgrade.OMLayoutVersionManagerImpl.init(OMLayoutVersionManagerImpl.java:100)
at 
org.apache.hadoop.ozone.om.upgrade.OMLayoutVersionManagerImpl.initialize(OMLayoutVersionManagerImpl.java:83)
at org.apache.hadoop.ozone.om.OzoneManager.(OzoneManager.java:363)
at 
org.apache.hadoop.ozone.om.OzoneManager.createOm(OzoneManager.java:930)
at 
org.apache.hadoop.ozone.MiniOzoneHAClusterImpl$Builder.createOMService(MiniOzoneHAClusterImpl.java:379)
at 
org.apache.hadoop.ozone.MiniOzoneHAClusterImpl$Builder.build(MiniOzoneHAClusterImpl.java:294)
at 
org.apache.hadoop.ozone.om.TestOzoneManagerHA.init(TestOzoneManagerHA.java:147)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
Caused by: java.lang.ClassNotFoundException: mockit.MockUp
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
at org.reflections.ReflectionUtils.forName(ReflectionUtils.java:310)
... 23 more
{code}






[GitHub] [hadoop-ozone] avijayanhwx merged pull request #1457: HDDS-4253. Add LayoutVersion request/response for DN registration.

2020-10-06 Thread GitBox


avijayanhwx merged pull request #1457:
URL: https://github.com/apache/hadoop-ozone/pull/1457


   






[jira] [Resolved] (HDDS-4253) SCM changes to process Layout Info in register request/response

2020-10-06 Thread Aravindan Vijayan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan resolved HDDS-4253.
-
Resolution: Fixed

PR Merged.

> SCM changes to process Layout Info in register request/response
> ---
>
> Key: HDDS-4253
> URL: https://issues.apache.org/jira/browse/HDDS-4253
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Prashant Pogde
>Assignee: Prashant Pogde
>Priority: Major
>  Labels: pull-request-available
>







[jira] [Updated] (HDDS-4253) SCM changes to process Layout Info in register request/response

2020-10-06 Thread Aravindan Vijayan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-4253:

Description: Add LayoutVersion request/response for DN registration.

> SCM changes to process Layout Info in register request/response
> ---
>
> Key: HDDS-4253
> URL: https://issues.apache.org/jira/browse/HDDS-4253
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Prashant Pogde
>Assignee: Prashant Pogde
>Priority: Major
>  Labels: pull-request-available
>
> Add LayoutVersion request/response for DN registration.






[GitHub] [hadoop-ozone] adoroszlai opened a new pull request #1479: HDDS-4313. Create thread-local instance of FileSystem in HadoopFsGenerator

2020-10-06 Thread GitBox


adoroszlai opened a new pull request #1479:
URL: https://github.com/apache/hadoop-ozone/pull/1479


   ## What changes were proposed in this pull request?
   
   1. Create a separate instance of `FileSystem` in `HadoopFsGenerator` for 
each test thread.
   2. Move directory creation to test setup.  Previously `mkdirs()` resulted in 
an extra RPC for each file.
   3. Provide a hook method in `BaseFreonGenerator` for thread-specific cleanup.
   
   https://issues.apache.org/jira/browse/HDDS-4313
   
   ## How was this patch tested?
   
   ```
   ozone freon ockg -n1 -t1 -p warmup
   ozone freon dfsg -n1 -t10
   ```
   
   ```
   /opt/profiler/profiler.sh -e lock -d 60 -o svg -f dfsg-lock.svg $(ps aux | 
grep 'proc_freo[n]' | awk '{ print $2 }')
   ```






[jira] [Updated] (HDDS-4313) Create thread-local instance of FileSystem in HadoopFsGenerator

2020-10-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4313:
-
Labels: pull-request-available  (was: )

> Create thread-local instance of FileSystem in HadoopFsGenerator
> ---
>
> Key: HDDS-4313
> URL: https://issues.apache.org/jira/browse/HDDS-4313
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: freon
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>
> [~elek]'s most recent performance test uncovered a client-side bottleneck in 
> Freon's Hadoop FS generator: a global {{FileSystem}} instance causes lock 
> contention among test threads.
> https://github.com/elek/ozone-notes/blob/master/static/results/23_hcfs_write/profile.svg






[jira] [Created] (HDDS-4313) Create thread-local instance of FileSystem in HadoopFsGenerator

2020-10-06 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-4313:
--

 Summary: Create thread-local instance of FileSystem in 
HadoopFsGenerator
 Key: HDDS-4313
 URL: https://issues.apache.org/jira/browse/HDDS-4313
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: freon
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


[~elek]'s most recent performance test uncovered a client-side bottleneck in 
Freon's Hadoop FS generator: a global {{FileSystem}} instance causes lock 
contention among test threads.

https://github.com/elek/ozone-notes/blob/master/static/results/23_hcfs_write/profile.svg
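
A thread-local instance removes that contention by giving each test thread its own 
client object. A minimal Java sketch of the pattern (a stand-in class is used in 
place of Hadoop's `FileSystem` so the example stays self-contained):

```java
import java.util.concurrent.atomic.AtomicInteger;

/** Sketch of the thread-local pattern proposed for HadoopFsGenerator. */
public class ThreadLocalFsDemo {
  /** Stand-in for an expensive, internally locked client (like FileSystem). */
  static class FakeFs {
    static final AtomicInteger INSTANCES = new AtomicInteger();
    FakeFs() { INSTANCES.incrementAndGet(); }
  }

  // Each thread lazily creates and reuses its own instance; no shared lock.
  static final ThreadLocal<FakeFs> FS = ThreadLocal.withInitial(FakeFs::new);

  public static void main(String[] args) throws InterruptedException {
    Runnable task = () -> FS.get(); // first get() per thread creates an instance
    Thread t1 = new Thread(task);
    Thread t2 = new Thread(task);
    t1.start(); t2.start();
    t1.join(); t2.join();
    // Two worker threads -> two independent instances were created.
    System.out.println(FakeFs.INSTANCES.get());
  }
}
```

Within a single thread, repeated `FS.get()` calls return the same instance, so the 
per-thread cleanup hook mentioned in the PR is still needed to close each copy.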






[GitHub] [hadoop-ozone] frischHWC opened a new pull request #1478: Fix inconsistency recon config keys starting with recon and not ozone

2020-10-06 Thread GitBox


frischHWC opened a new pull request #1478:
URL: https://github.com/apache/hadoop-ozone/pull/1478


   ## What changes were proposed in this pull request?
   Fix inconsistent Recon config keys
   
   ## What is the link to the Apache JIRA
   
   
https://issues.apache.org/jira/browse/HDDS-4309?jql=project%20in%20(HDDS)%20AND%20labels%20in%20(newbie)%20AND%20assignee%20is%20EMPTY%20AND%20status%20in%20(open%2C%20Reopened)
   
   ## How was this patch tested?
   
   Built the project, launched the docker containers, and verified that the new 
default parameters were present but the old ones were not.
   
   On recon container:
   ```
   > ozone getconf confKey recon.om.socket.timeout
   Configuration recon.om.socket.timeout is missing.
   > ozone getconf confKey ozone.recon.om.socket.timeout
   5s
   ```
   






[jira] [Assigned] (HDDS-4309) Fix inconsistent Recon config keys that start with "recon.om."

2020-10-06 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HDDS-4309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

François Risch reassigned HDDS-4309:


Assignee: François Risch

> Fix inconsistent Recon config keys that start with "recon.om."
> --
>
> Key: HDDS-4309
> URL: https://issues.apache.org/jira/browse/HDDS-4309
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Recon
>Affects Versions: 1.0.0
>Reporter: Aravindan Vijayan
>Assignee: François Risch
>Priority: Minor
>  Labels: newbie
>
> {code}
> hadoop-hdds/common/src/main/resources/ozone-default.xml
> 2318:recon.om.connection.request.timeout
> 2327:recon.om.connection.timeout
> 2336:recon.om.socket.timeout
> 2345:recon.om.snapshot.task.initial.delay
> 2353:recon.om.snapshot.task.interval.delay
> 2361:recon.om.snapshot.task.flush.param
> {code}
> These need to be deprecated and changed to "ozone.recon.om.<>".






[jira] [Updated] (HDDS-4311) Type-safe config design doc points to OM HA

2020-10-06 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-4311:
---
Status: Patch Available  (was: In Progress)

> Type-safe config design doc points to OM HA
> ---
>
> Key: HDDS-4311
> URL: https://issues.apache.org/jira/browse/HDDS-4311
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>
> Abstract and links for 
> http://hadoop.apache.org/ozone/docs/1.0.0/design/typesafeconfig.html are 
> wrong, reference OM HA design doc.






[GitHub] [hadoop-ozone] prashantpogde commented on a change in pull request #1457: HDDS-4253. Add LayoutVersion request/response for DN registration.

2020-10-06 Thread GitBox


prashantpogde commented on a change in pull request #1457:
URL: https://github.com/apache/hadoop-ozone/pull/1457#discussion_r500475653



##
File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
##
@@ -240,8 +247,19 @@ public VersionResponse getVersion(SCMVersionRequestProto 
versionRequest) {
   @Override
   public RegisteredCommand register(
   DatanodeDetails datanodeDetails, NodeReportProto nodeReport,
-  PipelineReportsProto pipelineReportsProto) {
-
+  PipelineReportsProto pipelineReportsProto,
+  LayoutVersionProto layoutInfo) {
+
+if (layoutInfo != null) {

Review comment:
   I would still need to check for the condition where the argument is not 
valid.








[GitHub] [hadoop-ozone] rakeshadr commented on pull request #1473: HDDS-4266: CreateFile : store parent dir entries into DirTable and file entry into separate FileTable

2020-10-06 Thread GitBox


rakeshadr commented on pull request #1473:
URL: https://github.com/apache/hadoop-ozone/pull/1473#issuecomment-704430445


   Thanks a lot @linyiqun for the review comments. I have updated the PR to 
address them. Please take another look when you get a chance.






[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #1473: HDDS-4266: CreateFile : store parent dir entries into DirTable and file entry into separate FileTable

2020-10-06 Thread GitBox


rakeshadr commented on a change in pull request #1473:
URL: https://github.com/apache/hadoop-ozone/pull/1473#discussion_r500470079



##
File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmKeyInfo.java
##
@@ -413,7 +461,8 @@ public KeyInfo getProtobuf(boolean ignorePipeline) {
 .addAllMetadata(KeyValueUtil.toProtobuf(metadata))
 .addAllAcls(OzoneAclUtil.toProtobuf(acls))
 .setObjectID(objectID)
-.setUpdateID(updateID);
+.setUpdateID(updateID)
+.setParentID(parentObjectID);

Review comment:
   I am not persisting fileName, since it is already the last component of the 
path (keyName). I have added logic to derive the file name from the keyName. 
Hope this is fine?
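The derivation described above could look like the following. This is a hedged sketch, not the actual Ozone helper; the class and method names are hypothetical, assuming only that the file name is the last "/"-separated component of the keyName.

```java
/**
 * Illustrative sketch (hypothetical, not the actual Ozone code): derive the
 * file name from a keyName instead of persisting it separately.
 */
public final class KeyNameUtil {

  static String fileNameFromKeyName(String keyName) {
    int idx = keyName.lastIndexOf('/');
    // A key without a parent directory is its own file name.
    return idx < 0 ? keyName : keyName.substring(idx + 1);
  }

  private KeyNameUtil() {
  }
}
```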








[jira] [Updated] (HDDS-4312) findbugs check succeeds despite compile error

2020-10-06 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-4312:
---
Status: Patch Available  (was: Open)

> findbugs check succeeds despite compile error
> -
>
> Key: HDDS-4312
> URL: https://issues.apache.org/jira/browse/HDDS-4312
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 1.1.0
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>
> Findbugs check has been silently failing but reporting success for some time 
> now.  The problem is that {{findbugs.sh}} determines exit code based on the 
> number of findbugs failures.  If {{compile}} step fails, exit code is 0, ie. 
> success.
> {code:title=https://github.com/apache/hadoop-ozone/runs/1210535433#step:3:866}
> 2020-10-02T18:37:57.0699502Z [ERROR] Failed to execute goal on project 
> hadoop-hdds-client: Could not resolve dependencies for project 
> org.apache.hadoop:hadoop-hdds-client:jar:1.1.0-SNAPSHOT: Could not find 
> artifact org.apache.hadoop:hadoop-hdds-common:jar:tests:1.1.0-SNAPSHOT in 
> apache.snapshots.https 
> (https://repository.apache.org/content/repositories/snapshots) -> [Help 1]
> {code}






[GitHub] [hadoop-ozone] errose28 commented on a change in pull request #1435: HDDS-4122. Implement OM Delete Expired Open Key Request and Response

2020-10-06 Thread GitBox


errose28 commented on a change in pull request #1435:
URL: https://github.com/apache/hadoop-ozone/pull/1435#discussion_r500389769



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMOpenKeysDeleteRequest.java
##
@@ -0,0 +1,233 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.om.response.key;
+
+import com.google.common.base.Optional;
+import org.apache.hadoop.hdds.utils.db.cache.CacheKey;
+import org.apache.hadoop.hdds.utils.db.cache.CacheValue;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
+import org.apache.hadoop.ozone.om.request.key.OMKeyRequest;
+import org.apache.hadoop.ozone.om.request.util.OmResponseUtil;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OpenKeyBucket;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OpenKey;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMRequest;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.Collection;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.List;
+
+import static 
org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+
+/**
+ * Handles requests to move open keys from the open key table to the delete
+ * table. Modifies the open key table cache only, and no underlying databases.
+ * The delete table cache does not need to be modified since it is not used
+ * for client response validation.
+ */
+public class OMOpenKeysDeleteRequest extends OMKeyRequest {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMOpenKeysDeleteRequest.class);
+
+  public OMOpenKeysDeleteRequest(OMRequest omRequest) {
+super(omRequest);
+  }
+
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+  long trxnLogIndex, OzoneManagerDoubleBufferHelper omDoubleBufferHelper) {
+
+OMMetrics omMetrics = ozoneManager.getMetrics();
+omMetrics.incNumOpenKeyDeleteRequests();
+
+OzoneManagerProtocolProtos.DeleteOpenKeysRequest deleteOpenKeysRequest =
+getOmRequest().getDeleteOpenKeysRequest();
+
+List<OpenKeyBucket> submittedOpenKeyBucket =
+deleteOpenKeysRequest.getOpenKeysPerBucketList();
+
+long numSubmittedOpenKeys = submittedOpenKeyBucket.stream()
+.mapToLong(OpenKeyBucket::getKeysCount).sum();
+
+LOG.debug("{} open keys submitted for deletion.", numSubmittedOpenKeys);
+omMetrics.incNumOpenKeysSubmittedForDeletion(numSubmittedOpenKeys);
+
+OzoneManagerProtocolProtos.OMResponse.Builder omResponse =
+OmResponseUtil.getOMResponseBuilder(getOmRequest());
+
+IOException exception = null;
+OMClientResponse omClientResponse = null;
+Result result = null;
+Map<String, OmKeyInfo> deletedOpenKeys = new HashMap<>();
+
+try {
+  // Open keys are grouped by bucket, but there may be multiple buckets
+  // per volume. This maps volume name to volume args to track
+  // all volume updates for this request.
+  Map<String, OmVolumeArgs> modifiedVolumes = new HashMap<>();
+  OMMetadataManager metadataManager = ozoneManager.getMetadataManager();
+
+  for (OpenKeyBucket openKeyBucket: submittedOpenKeyBucket) {
+// For each bucket where keys will be deleted from,
+// get its bucket lock and update the cache accordingly.
+Map<String, OmKeyInfo> deleted = updateOpenKeyTableCache(ozoneManager,
+trxnLogIndex, openKeyBucket);
+
+deletedOpenKeys.putAll(deleted);
+
+// If open keys were deleted from this bucket and its volume still
+// exists, update the volume's byte usage in the cache.
+if (!deleted.isEmpty()) {
+  String volumeName = openKeyBucket.getVolumeName();
+  // 

[GitHub] [hadoop-ozone] adoroszlai opened a new pull request #1477: HDDS-4311. Type-safe config design doc points to OM HA

2020-10-06 Thread GitBox


adoroszlai opened a new pull request #1477:
URL: https://github.com/apache/hadoop-ozone/pull/1477


   ## What changes were proposed in this pull request?
   
   Fix the Jira reference, abstract, and link to the doc attached to HDDS-1466.
   
   https://issues.apache.org/jira/browse/HDDS-4311
   
   ## How was this patch tested?
   
   ```
   mvn -pl :hadoop-hdds-docs clean package
   open hadoop-hdds/docs/target/classes/docs/design/typesafeconfig.html
   ```






[GitHub] [hadoop-ozone] errose28 commented on a change in pull request #1435: HDDS-4122. Implement OM Delete Expired Open Key Request and Response

2020-10-06 Thread GitBox


errose28 commented on a change in pull request #1435:
URL: https://github.com/apache/hadoop-ozone/pull/1435#discussion_r500393541



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/AbstractOMKeyDeleteResponse.java
##
@@ -0,0 +1,150 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.response.key;
+
+import org.apache.hadoop.hdds.utils.db.Table;
+import org.apache.hadoop.ozone.OmUtils;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfoGroup;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.helpers.RepeatedOmKeyInfo;
+import org.apache.hadoop.ozone.om.response.CleanupTableInfo;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.hdds.utils.db.BatchOperation;
+
+import java.io.IOException;
+import javax.annotation.Nullable;
+import javax.annotation.Nonnull;
+
+import static org.apache.hadoop.ozone.om.OmMetadataManagerImpl.DELETED_TABLE;
+
+/**
+ * Response for DeleteKey request.
+ */
+@CleanupTableInfo(cleanupTables = {DELETED_TABLE})
+public abstract class AbstractOMKeyDeleteResponse extends OMClientResponse {
+
+  private boolean isRatisEnabled;
+
+  public AbstractOMKeyDeleteResponse(
+  @Nonnull OMResponse omResponse, boolean isRatisEnabled) {
+
+super(omResponse);
+this.isRatisEnabled = isRatisEnabled;
+  }
+
+  /**
+   * For when the request is not successful.
+   * For a successful request, the other constructor should be used.
+   */
+  public AbstractOMKeyDeleteResponse(@Nonnull OMResponse omResponse) {
+super(omResponse);
+checkStatusNotOK();
+  }
+
+  /**
+   * Adds the operation of deleting the {@code keyName omKeyInfo} pair from
+   * {@code fromTable} to the batch operation {@code batchOperation}. The
+   * batch operation is not committed, so no changes are persisted to disk.
+   * The log transaction index used will be retrieved by calling
+   * {@link OmKeyInfo#getUpdateID} on {@code omKeyInfo}.
+   */
+  protected void deleteFromTable(
+  OMMetadataManager omMetadataManager,
+  BatchOperation batchOperation,
+  Table<String, OmKeyInfo> fromTable,
+  String keyName,
+  OmKeyInfo omKeyInfo) throws IOException {
+
+deleteFromTable(omMetadataManager, batchOperation, fromTable, keyName,
+omKeyInfo, omKeyInfo.getUpdateID());
+  }
+
+  /**
+   * Adds the operation of deleting the {@code keyName omKeyInfo} pair from
+   * {@code fromTable} to the batch operation {@code batchOperation}. The
+   * batch operation is not committed, so no changes are persisted to disk.
+   */
+  protected void deleteFromTable(
+  OMMetadataManager omMetadataManager,
+  BatchOperation batchOperation,
+  Table<String, OmKeyInfo> fromTable,
+  String keyName,
+  OmKeyInfo omKeyInfo, long trxnLogIndex) throws IOException {
+
+// For OmResponse with failure, this should do nothing. This method is
+// not called in failure scenario in OM code.
+fromTable.deleteWithBatch(batchOperation, keyName);
+
+// If Key is not empty add this to delete table.
+if (!isKeyEmpty(omKeyInfo)) {
+  // If a deleted key is put in the table where a key with the same
+  // name already exists, then the old deleted key information would be
+  // lost. To avoid this, first check if a key with same name exists.
+  // deletedTable in OM Metadata stores <keyName, RepeatedOmKeyInfo>.
+  // The RepeatedOmKeyInfo is the structure that allows us to store a
+  // list of OmKeyInfo that can be tied to same key name. For a keyName
+  // if RepeatedOMKeyInfo structure is null, we create a new instance,
+  // if it is not null, then we simply add to the list and store this
+  // instance in deletedTable.
+  RepeatedOmKeyInfo repeatedOmKeyInfo =
+  omMetadataManager.getDeletedTable().get(keyName);
+  repeatedOmKeyInfo = OmUtils.prepareKeyForDelete(
+  omKeyInfo, repeatedOmKeyInfo, trxnLogIndex,
+  isRatisEnabled);
+  

[GitHub] [hadoop-ozone] errose28 commented on a change in pull request #1435: HDDS-4122. Implement OM Delete Expired Open Key Request and Response

2020-10-06 Thread GitBox


errose28 commented on a change in pull request #1435:
URL: https://github.com/apache/hadoop-ozone/pull/1435#discussion_r500391837



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMOpenKeysDeleteRequest.java
##
@@ -0,0 +1,233 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.om.response.key;
+
+import com.google.common.base.Optional;
+import org.apache.hadoop.hdds.utils.db.cache.CacheKey;
+import org.apache.hadoop.hdds.utils.db.cache.CacheValue;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
+import org.apache.hadoop.ozone.om.request.key.OMKeyRequest;
+import org.apache.hadoop.ozone.om.request.util.OmResponseUtil;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OpenKeyBucket;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OpenKey;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMRequest;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.Collection;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.List;
+
+import static 
org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+
+/**
+ * Handles requests to move open keys from the open key table to the delete
+ * table. Modifies the open key table cache only, and no underlying databases.
+ * The delete table cache does not need to be modified since it is not used
+ * for client response validation.
+ */
+public class OMOpenKeysDeleteRequest extends OMKeyRequest {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMOpenKeysDeleteRequest.class);
+
+  public OMOpenKeysDeleteRequest(OMRequest omRequest) {
+super(omRequest);
+  }
+
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+  long trxnLogIndex, OzoneManagerDoubleBufferHelper omDoubleBufferHelper) {
+
+OMMetrics omMetrics = ozoneManager.getMetrics();
+omMetrics.incNumOpenKeyDeleteRequests();
+
+OzoneManagerProtocolProtos.DeleteOpenKeysRequest deleteOpenKeysRequest =
+getOmRequest().getDeleteOpenKeysRequest();
+
+List<OpenKeyBucket> submittedOpenKeyBucket =
+deleteOpenKeysRequest.getOpenKeysPerBucketList();
+
+long numSubmittedOpenKeys = submittedOpenKeyBucket.stream()
+.mapToLong(OpenKeyBucket::getKeysCount).sum();
+
+LOG.debug("{} open keys submitted for deletion.", numSubmittedOpenKeys);
+omMetrics.incNumOpenKeysSubmittedForDeletion(numSubmittedOpenKeys);
+
+OzoneManagerProtocolProtos.OMResponse.Builder omResponse =
+OmResponseUtil.getOMResponseBuilder(getOmRequest());
+
+IOException exception = null;
+OMClientResponse omClientResponse = null;
+Result result = null;
+Map<String, OmKeyInfo> deletedOpenKeys = new HashMap<>();
+
+try {
+  // Open keys are grouped by bucket, but there may be multiple buckets
+  // per volume. This maps volume name to volume args to track
+  // all volume updates for this request.
+  Map<String, OmVolumeArgs> modifiedVolumes = new HashMap<>();
+  OMMetadataManager metadataManager = ozoneManager.getMetadataManager();
+
+  for (OpenKeyBucket openKeyBucket: submittedOpenKeyBucket) {
+// For each bucket where keys will be deleted from,
+// get its bucket lock and update the cache accordingly.
+Map<String, OmKeyInfo> deleted = updateOpenKeyTableCache(ozoneManager,
+trxnLogIndex, openKeyBucket);
+
+deletedOpenKeys.putAll(deleted);
+
+// If open keys were deleted from this bucket and its volume still
+// exists, update the volume's byte usage in the cache.
+if (!deleted.isEmpty()) {
+  String volumeName = openKeyBucket.getVolumeName();
+  // 

[jira] [Updated] (HDDS-4312) findbugs check succeeds despite compile error

2020-10-06 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-4312:
---
Summary: findbugs check succeeds despite compile error  (was: findbugs 
check succeeds despite failure)

> findbugs check succeeds despite compile error
> -
>
> Key: HDDS-4312
> URL: https://issues.apache.org/jira/browse/HDDS-4312
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 1.1.0
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>
> Findbugs check has been silently failing but reporting success for some time 
> now.  The problem is that {{findbugs.sh}} determines exit code based on the 
> number of findbugs failures.  If {{compile}} step fails, exit code is 0, ie. 
> success.
> {code:title=https://github.com/apache/hadoop-ozone/runs/1210535433#step:3:866}
> 2020-10-02T18:37:57.0699502Z [ERROR] Failed to execute goal on project 
> hadoop-hdds-client: Could not resolve dependencies for project 
> org.apache.hadoop:hadoop-hdds-client:jar:1.1.0-SNAPSHOT: Could not find 
> artifact org.apache.hadoop:hadoop-hdds-common:jar:tests:1.1.0-SNAPSHOT in 
> apache.snapshots.https 
> (https://repository.apache.org/content/repositories/snapshots) -> [Help 1]
> {code}






[GitHub] [hadoop-ozone] errose28 commented on a change in pull request #1435: HDDS-4122. Implement OM Delete Expired Open Key Request and Response

2020-10-06 Thread GitBox


errose28 commented on a change in pull request #1435:
URL: https://github.com/apache/hadoop-ozone/pull/1435#discussion_r500389769



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMOpenKeysDeleteRequest.java
##
@@ -0,0 +1,233 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.om.response.key;
+
+import com.google.common.base.Optional;
+import org.apache.hadoop.hdds.utils.db.cache.CacheKey;
+import org.apache.hadoop.hdds.utils.db.cache.CacheValue;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
+import org.apache.hadoop.ozone.om.request.key.OMKeyRequest;
+import org.apache.hadoop.ozone.om.request.util.OmResponseUtil;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OpenKeyBucket;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OpenKey;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMRequest;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.Collection;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.List;
+
+import static org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+
+/**
+ * Handles requests to move open keys from the open key table to the delete
+ * table. Modifies the open key table cache only, and no underlying databases.
+ * The delete table cache does not need to be modified since it is not used
+ * for client response validation.
+ */
+public class OMOpenKeysDeleteRequest extends OMKeyRequest {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(OMOpenKeysDeleteRequest.class);
+
+  public OMOpenKeysDeleteRequest(OMRequest omRequest) {
+    super(omRequest);
+  }
+
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+      long trxnLogIndex, OzoneManagerDoubleBufferHelper omDoubleBufferHelper) {
+
+    OMMetrics omMetrics = ozoneManager.getMetrics();
+    omMetrics.incNumOpenKeyDeleteRequests();
+
+    OzoneManagerProtocolProtos.DeleteOpenKeysRequest deleteOpenKeysRequest =
+        getOmRequest().getDeleteOpenKeysRequest();
+
+    List<OpenKeyBucket> submittedOpenKeyBucket =
+        deleteOpenKeysRequest.getOpenKeysPerBucketList();
+
+    long numSubmittedOpenKeys = submittedOpenKeyBucket.stream()
+        .mapToLong(OpenKeyBucket::getKeysCount).sum();
+
+    LOG.debug("{} open keys submitted for deletion.", numSubmittedOpenKeys);
+    omMetrics.incNumOpenKeysSubmittedForDeletion(numSubmittedOpenKeys);
+
+    OzoneManagerProtocolProtos.OMResponse.Builder omResponse =
+        OmResponseUtil.getOMResponseBuilder(getOmRequest());
+
+    IOException exception = null;
+    OMClientResponse omClientResponse = null;
+    Result result = null;
+    Map<String, OmKeyInfo> deletedOpenKeys = new HashMap<>();
+
+    try {
+      // Open keys are grouped by bucket, but there may be multiple buckets
+      // per volume. This maps volume name to volume args to track
+      // all volume updates for this request.
+      Map<String, OmVolumeArgs> modifiedVolumes = new HashMap<>();
+      OMMetadataManager metadataManager = ozoneManager.getMetadataManager();
+
+      for (OpenKeyBucket openKeyBucket: submittedOpenKeyBucket) {
+        // For each bucket where keys will be deleted from,
+        // get its bucket lock and update the cache accordingly.
+        Map<String, OmKeyInfo> deleted = updateOpenKeyTableCache(ozoneManager,
+            trxnLogIndex, openKeyBucket);
+
+        deletedOpenKeys.putAll(deleted);
+
+        // If open keys were deleted from this bucket and its volume still
+        // exists, update the volume's byte usage in the cache.
+        if (!deleted.isEmpty()) {
+          String volumeName = openKeyBucket.getVolumeName();
+          // 
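The patch is truncated above; the per-bucket counting step it uses (`mapToLong` over `OpenKeyBucket::getKeysCount`) can be illustrated standalone. The `OpenKeyBucket` below is a hypothetical stand-in for the generated protobuf class, not the real one:

```java
import java.util.List;

public class OpenKeyCountDemo {
    // Hypothetical stand-in for OzoneManagerProtocolProtos.OpenKeyBucket.
    static final class OpenKeyBucket {
        private final int keysCount;
        OpenKeyBucket(int keysCount) { this.keysCount = keysCount; }
        int getKeysCount() { return keysCount; }
    }

    // Same aggregation as in the patch: sum the key counts across buckets.
    static long countSubmittedOpenKeys(List<OpenKeyBucket> buckets) {
        return buckets.stream().mapToLong(OpenKeyBucket::getKeysCount).sum();
    }

    public static void main(String[] args) {
        long total = countSubmittedOpenKeys(
            List.of(new OpenKeyBucket(2), new OpenKeyBucket(5)));
        System.out.println(total); // 7
    }
}
```

This total is what feeds the `incNumOpenKeysSubmittedForDeletion` metric before any cache updates happen.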

[jira] [Updated] (HDDS-4312) findbugs check succeeds despite failure

2020-10-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4312:
-
Labels: pull-request-available  (was: )

> findbugs check succeeds despite failure
> ---
>
> Key: HDDS-4312
> URL: https://issues.apache.org/jira/browse/HDDS-4312
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 1.1.0
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>
> Findbugs check has been silently failing but reporting success for some time 
> now.  The problem is that {{findbugs.sh}} determines its exit code based on the 
> number of findbugs violations.  If the {{compile}} step fails, the exit code is 
> 0, i.e. success.
> {code:title=https://github.com/apache/hadoop-ozone/runs/1210535433#step:3:866}
> 2020-10-02T18:37:57.0699502Z [ERROR] Failed to execute goal on project 
> hadoop-hdds-client: Could not resolve dependencies for project 
> org.apache.hadoop:hadoop-hdds-client:jar:1.1.0-SNAPSHOT: Could not find 
> artifact org.apache.hadoop:hadoop-hdds-common:jar:tests:1.1.0-SNAPSHOT in 
> apache.snapshots.https 
> (https://repository.apache.org/content/repositories/snapshots) -> [Help 1]
> {code}
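The failure mode can be sketched in plain shell. This is a hypothetical illustration of the exit-code logic, not the actual findbugs.sh:

```shell
# Old behavior: the exit code is derived only from the violation count,
# so a compile failure (with zero violations counted) looks like success.
determine_exit_old() {
  compile_rc=$1
  violations=$2
  echo "$violations"            # ignores compile_rc -> silent success
}

# Fixed behavior: propagate a compile failure before counting violations.
determine_exit_fixed() {
  compile_rc=$1
  violations=$2
  if [ "$compile_rc" -ne 0 ]; then
    echo 1
  else
    echo "$violations"
  fi
}

# Compile failed (rc=1), zero violations found:
determine_exit_old 1 0     # prints 0 -> check "passes"
determine_exit_fixed 1 0   # prints 1 -> check fails, as it should
```

With the old logic, the dependency-resolution error shown above produced exit code 0 and a green check.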



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] adoroszlai opened a new pull request #1476: HDDS-4312. findbugs check succeeds despite failure

2020-10-06 Thread GitBox


adoroszlai opened a new pull request #1476:
URL: https://github.com/apache/hadoop-ozone/pull/1476


   ## What changes were proposed in this pull request?
   
   1. Make findbugs check correctly report compilation failure via exit code
   2. Compile test classes, required by `test-jar` dependency
   3. Exclude Protobuf generated classes in new 
`hadoop-ozone/interface-storage` module
   
   https://issues.apache.org/jira/browse/HDDS-4312
   
   ## How was this patch tested?
   
   1. Verified that findbugs check can [now 
fail](https://github.com/adoroszlai/hadoop-ozone/runs/1215266036#step:3:866) 
(without fixes 2 and 3)
   2. Verified that the check proceeds beyond compilation, and [finds actual 
findbugs 
violations](https://github.com/adoroszlai/hadoop-ozone/runs/1215409391#step:3:1808)
 (without fix 3)
   3. Verified that findbugs check 
[passes](https://github.com/adoroszlai/hadoop-ozone/runs/1215527172#step:3:1803)
 (with all 3 fixes)



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org






[jira] [Commented] (HDDS-3102) ozone getconf command should use the GenericCli parent class

2020-10-06 Thread Arpit Agarwal (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17208835#comment-17208835
 ] 

Arpit Agarwal commented on HDDS-3102:
-

This was an incompatible change [~elek].

> ozone getconf command should use the GenericCli parent class
> 
>
> Key: HDDS-3102
> URL: https://issues.apache.org/jira/browse/HDDS-3102
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone CLI
>Reporter: Marton Elek
>Assignee: Rui Wang
>Priority: Major
>  Labels: incompatible, newbie, pull-request-available
> Fix For: 1.1.0
>
>
> org.apache.hadoop.ozone.freon.OzoneGetConf implements a tool to print out 
> current configuration values.
> For all the other CLI tools, we have already started to use picocli and the 
> GenericCli parent class.
> To provide a better user experience, we should migrate this tool to use 
> GenericCli (and move it to the tools project and remove freon from the package 
> name).






[jira] [Updated] (HDDS-3102) ozone getconf command should use the GenericCli parent class

2020-10-06 Thread Arpit Agarwal (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-3102:

Labels: incompatible newbie pull-request-available  (was: newbie 
pull-request-available)

> ozone getconf command should use the GenericCli parent class
> 
>
> Key: HDDS-3102
> URL: https://issues.apache.org/jira/browse/HDDS-3102
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone CLI
>Reporter: Marton Elek
>Assignee: Rui Wang
>Priority: Major
>  Labels: incompatible, newbie, pull-request-available
> Fix For: 1.1.0
>
>
> org.apache.hadoop.ozone.freon.OzoneGetConf implements a tool to print out 
> current configuration values.
> For all the other CLI tools, we have already started to use picocli and the 
> GenericCli parent class.
> To provide a better user experience, we should migrate this tool to use 
> GenericCli (and move it to the tools project and remove freon from the package 
> name).






[jira] [Updated] (HDDS-4310) Ozone getconf broke the compatibility

2020-10-06 Thread Arpit Agarwal (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-4310:

Description: 
Currently, ozone getconf '-confKey' does not work, as 'HDDS-3102' removed the 
need to prepend '-' to options.
{code:java}
RUNNING: ozone getconf -confKey ozone.om.service.ids 2020-10-05 
19:10:09,110|INFO|MainThread|machine.py:180 - 
run()||GUID=8644ce5b-cfe9-4e6b-9b3f-55c29c950489|Unknown options: '-confKey', 
'ozone.om.service.ids' 2020-10-05 19:10:09,111|INFO|MainThread|machine.py:180 - 
run()||GUID=8644ce5b-cfe9-4e6b-9b3f-55c29c950489|Possible solutions: -conf
{code}
Some users have automated workflows built on these commands, and this change 
broke them.

  was:
Currently ozone getconf '-confKey' does not work as 'HDDS-3102'  removed the 
need of prepending  - with options.


{code:java}
RUNNING: /opt/cloudera/parcels/CDH/bin/ozone getconf -confKey 
ozone.om.service.ids 2020-10-05 19:10:09,110|INFO|MainThread|machine.py:180 - 
run()||GUID=8644ce5b-cfe9-4e6b-9b3f-55c29c950489|Unknown options: '-confKey', 
'ozone.om.service.ids' 2020-10-05 19:10:09,111|INFO|MainThread|machine.py:180 - 
run()||GUID=8644ce5b-cfe9-4e6b-9b3f-55c29c950489|Possible solutions: -conf
{code}

There are some users which did the automation with the commands and this change 
broke them.


> Ozone getconf broke the compatibility
> -
>
> Key: HDDS-4310
> URL: https://issues.apache.org/jira/browse/HDDS-4310
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone CLI
>Affects Versions: 1.0.0
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
> Fix For: 1.1.0
>
>
> Currently, ozone getconf '-confKey' does not work, as 'HDDS-3102' removed the 
> need to prepend '-' to options.
> {code:java}
> RUNNING: ozone getconf -confKey ozone.om.service.ids 2020-10-05 
> 19:10:09,110|INFO|MainThread|machine.py:180 - 
> run()||GUID=8644ce5b-cfe9-4e6b-9b3f-55c29c950489|Unknown options: '-confKey', 
> 'ozone.om.service.ids' 2020-10-05 19:10:09,111|INFO|MainThread|machine.py:180 
> - run()||GUID=8644ce5b-cfe9-4e6b-9b3f-55c29c950489|Possible solutions: -conf
> {code}
> Some users have automated workflows built on these commands, and this 
> change broke them.






[GitHub] [hadoop-ozone] adoroszlai merged pull request #1475: HDDS-4310: Ozone getconf broke the compatibility

2020-10-06 Thread GitBox


adoroszlai merged pull request #1475:
URL: https://github.com/apache/hadoop-ozone/pull/1475


   






[jira] [Created] (HDDS-4311) Type-safe config design doc points to OM HA

2020-10-06 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-4311:
--

 Summary: Type-safe config design doc points to OM HA
 Key: HDDS-4311
 URL: https://issues.apache.org/jira/browse/HDDS-4311
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: documentation
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


The abstract and links for 
http://hadoop.apache.org/ozone/docs/1.0.0/design/typesafeconfig.html are wrong; 
they reference the OM HA design doc.






[jira] [Created] (HDDS-4312) findbugs check succeeds despite failure

2020-10-06 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-4312:
--

 Summary: findbugs check succeeds despite failure
 Key: HDDS-4312
 URL: https://issues.apache.org/jira/browse/HDDS-4312
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: build
Affects Versions: 1.1.0
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


Findbugs check has been silently failing but reporting success for some time 
now.  The problem is that {{findbugs.sh}} determines its exit code based on the 
number of findbugs violations.  If the {{compile}} step fails, the exit code is 
0, i.e. success.

{code:title=https://github.com/apache/hadoop-ozone/runs/1210535433#step:3:866}
2020-10-02T18:37:57.0699502Z [ERROR] Failed to execute goal on project 
hadoop-hdds-client: Could not resolve dependencies for project 
org.apache.hadoop:hadoop-hdds-client:jar:1.1.0-SNAPSHOT: Could not find 
artifact org.apache.hadoop:hadoop-hdds-common:jar:tests:1.1.0-SNAPSHOT in 
apache.snapshots.https 
(https://repository.apache.org/content/repositories/snapshots) -> [Help 1]
{code}






[GitHub] [hadoop-ozone] runzhiwang edited a comment on pull request #1371: HDDS-2922. Balance ratis leader distribution in datanodes

2020-10-06 Thread GitBox


runzhiwang edited a comment on pull request #1371:
URL: https://github.com/apache/hadoop-ozone/pull/1371#issuecomment-704242576


   @bshashikant Thanks for the suggestions. Actually, RandomLeaderChoosePolicy 
does not choose a datanode; it returns null in 
[chooseLeader](https://github.com/apache/hadoop-ozone/blob/f573b4a8b45f4463f0bcb95971a45789bda91d5c/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/leader/choose/algorithms/RandomLeaderChoosePolicy.java#L40),
 so all the datanodes are assigned the [same 
priority](https://github.com/apache/hadoop-ozone/blob/f573b4a8b45f4463f0bcb95971a45789bda91d5c/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/CreatePipelineCommand.java#L60),
 as is currently the case. The name RandomLeaderChoosePolicy may be confusing; 
sorry for the misleading name. Do you have a better one?
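The behavior described here (returning null to mean "no recommendation") can be sketched with a hypothetical simplification of the policy interface; the names below are illustrative, not the actual Ozone classes:

```java
import java.util.List;

// Hypothetical simplification of the leader-choose policy discussed above.
interface LeaderChoosePolicy {
    // Returning null means "no recommendation": every datanode then gets
    // the same election priority, matching the pre-HDDS-2922 behavior.
    String chooseLeader(List<String> datanodes);
}

class RandomLeaderChoosePolicyDemo implements LeaderChoosePolicy {
    @Override
    public String chooseLeader(List<String> datanodes) {
        return null; // no datanode is preferred
    }

    public static void main(String[] args) {
        LeaderChoosePolicy p = new RandomLeaderChoosePolicyDemo();
        System.out.println(p.chooseLeader(List.of("dn1", "dn2")) == null); // true
    }
}
```

So despite its name, the policy does not pick a random leader; it declines to pick one at all, and Ratis election decides.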






[GitHub] [hadoop-ozone] runzhiwang edited a comment on pull request #1371: HDDS-2922. Balance ratis leader distribution in datanodes

2020-10-06 Thread GitBox


runzhiwang edited a comment on pull request #1371:
URL: https://github.com/apache/hadoop-ozone/pull/1371#issuecomment-704242576


   @bshashikant Thanks for the suggestions. Actually, RandomLeaderChoosePolicy 
does not choose a datanode; it returns null in 
[chooseLeader](https://github.com/apache/hadoop-ozone/pull/1371/files#diff-b180475d031c8ef84f96e7d623f11506R40),
 so all the datanodes are assigned the [same 
priority](https://github.com/apache/hadoop-ozone/blob/f573b4a8b45f4463f0bcb95971a45789bda91d5c/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/CreatePipelineCommand.java#L60),
 as is currently the case. The name RandomLeaderChoosePolicy may be confusing; 
sorry for the misleading name. Do you have a better one?






[GitHub] [hadoop-ozone] runzhiwang commented on pull request #1371: HDDS-2922. Balance ratis leader distribution in datanodes

2020-10-06 Thread GitBox


runzhiwang commented on pull request #1371:
URL: https://github.com/apache/hadoop-ozone/pull/1371#issuecomment-704242576


   @bshashikant Thanks for the suggestions. Actually, RandomLeaderChoosePolicy 
does not choose a datanode; it returns null in 
[chooseLeader](https://github.com/apache/hadoop-ozone/pull/1371/files#diff-b180475d031c8ef84f96e7d623f11506R40),
 so all the datanodes are assigned the [same 
priority](https://github.com/apache/hadoop-ozone/pull/1371/files#diff-a9f8deba9c990cb9b148008785c8cff2R60),
 as is currently the case. The name RandomLeaderChoosePolicy may be confusing; 
sorry for the misleading name. Do you have a better one?






[jira] [Resolved] (HDDS-4310) Ozone getconf broke the compatibility

2020-10-06 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai resolved HDDS-4310.

Fix Version/s: 1.1.0
   Resolution: Fixed

> Ozone getconf broke the compatibility
> -
>
> Key: HDDS-4310
> URL: https://issues.apache.org/jira/browse/HDDS-4310
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone CLI
>Affects Versions: 1.0.0
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> Currently, ozone getconf '-confKey' does not work, as 'HDDS-3102' removed the 
> need to prepend '-' to options.
> {code:java}
> RUNNING: /opt/cloudera/parcels/CDH/bin/ozone getconf -confKey 
> ozone.om.service.ids 2020-10-05 19:10:09,110|INFO|MainThread|machine.py:180 - 
> run()||GUID=8644ce5b-cfe9-4e6b-9b3f-55c29c950489|Unknown options: '-confKey', 
> 'ozone.om.service.ids' 2020-10-05 19:10:09,111|INFO|MainThread|machine.py:180 
> - run()||GUID=8644ce5b-cfe9-4e6b-9b3f-55c29c950489|Possible solutions: -conf
> {code}
> Some users have automated workflows built on these commands, and this 
> change broke them.






[jira] [Updated] (HDDS-4310) Ozone getconf broke the compatibility

2020-10-06 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-4310:
---
Labels:   (was: pull-request-available)

> Ozone getconf broke the compatibility
> -
>
> Key: HDDS-4310
> URL: https://issues.apache.org/jira/browse/HDDS-4310
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone CLI
>Affects Versions: 1.0.0
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
> Fix For: 1.1.0
>
>
> Currently, ozone getconf '-confKey' does not work, as 'HDDS-3102' removed the 
> need to prepend '-' to options.
> {code:java}
> RUNNING: /opt/cloudera/parcels/CDH/bin/ozone getconf -confKey 
> ozone.om.service.ids 2020-10-05 19:10:09,110|INFO|MainThread|machine.py:180 - 
> run()||GUID=8644ce5b-cfe9-4e6b-9b3f-55c29c950489|Unknown options: '-confKey', 
> 'ozone.om.service.ids' 2020-10-05 19:10:09,111|INFO|MainThread|machine.py:180 
> - run()||GUID=8644ce5b-cfe9-4e6b-9b3f-55c29c950489|Possible solutions: -conf
> {code}
> Some users have automated workflows built on these commands, and this 
> change broke them.






[GitHub] [hadoop-ozone] adoroszlai commented on a change in pull request #1475: HDDS-4310: Ozone getconf broke the compatibility

2020-10-06 Thread GitBox


adoroszlai commented on a change in pull request #1475:
URL: https://github.com/apache/hadoop-ozone/pull/1475#discussion_r500240713



##
File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/conf/PrintConfKeyCommandHandler.java
##
@@ -42,8 +43,9 @@ public Void call() throws Exception {
 String value = tool.getConf().getTrimmed(confKey);
 if (value != null) {
   tool.printOut(value);
+} else {
+  tool.printError("Configuration " + confKey + " is missing.");

Review comment:
   Nice find.








[GitHub] [hadoop-ozone] linyiqun commented on pull request #1457: HDDS-4253. Add LayoutVersion request/response for DN registration.

2020-10-06 Thread GitBox


linyiqun commented on pull request #1457:
URL: https://github.com/apache/hadoop-ozone/pull/1457#issuecomment-704221050


   All checks have passed, +1.






[GitHub] [hadoop-ozone] bshashikant commented on pull request #1371: HDDS-2922. Balance ratis leader distribution in datanodes

2020-10-06 Thread GitBox


bshashikant commented on pull request #1371:
URL: https://github.com/apache/hadoop-ozone/pull/1371#issuecomment-704195949


   > @GlenGeng @bshashikant Thanks for review. I have updated the patch.
   > 
   > > Can we also make the policy configurable? Also, one policy should also 
be defined for no priority at all incase, this turns out to be a performance 
killer.
   > 
   > policy can be configured by `ozone.scm.pipeline.leader-choose.policy`. And 
define a policy for no priority named `RandomLeaderChoosePolicy`
   
   @runzhiwang, please correct me if I am wrong: RandomLeaderChoosePolicy 
still chooses a datanode randomly, and this is suggested to Ratis while creating 
the pipeline.
   With NO_PRIORITY, I meant that we should not have any recommendation for a 
leader at all (as is currently the case). Usually in such cases, whoever starts 
the Ratis leader election first becomes the leader.






[GitHub] [hadoop-ozone] runzhiwang commented on pull request #1371: HDDS-2922. Balance ratis leader distribution in datanodes

2020-10-06 Thread GitBox


runzhiwang commented on pull request #1371:
URL: https://github.com/apache/hadoop-ozone/pull/1371#issuecomment-704131953


   @GlenGeng @bshashikant  Thanks for review. I have updated the patch.
   
   `Can we also make the policy configurable? Also, one policy should also be 
defined for no priority at all incase, this turns out to be a performance 
killer.`
   
   The policy can be configured by `ozone.scm.pipeline.leader-choose.policy`, and 
a policy with no priority, named `RandomLeaderChoosePolicy`, is defined.






[GitHub] [hadoop-ozone] runzhiwang edited a comment on pull request #1371: HDDS-2922. Balance ratis leader distribution in datanodes

2020-10-06 Thread GitBox


runzhiwang edited a comment on pull request #1371:
URL: https://github.com/apache/hadoop-ozone/pull/1371#issuecomment-704131953


   @GlenGeng @bshashikant  Thanks for review. I have updated the patch.
   
   > Can we also make the policy configurable? Also, one policy should also be 
defined for no priority at all incase, this turns out to be a performance 
killer.
   
   The policy can be configured by `ozone.scm.pipeline.leader-choose.policy`, and 
a policy with no priority, named `RandomLeaderChoosePolicy`, is defined.






[GitHub] [hadoop-ozone] sodonnel closed pull request #1474: Merge Master into the HDDS-1880-Decom branch

2020-10-06 Thread GitBox


sodonnel closed pull request #1474:
URL: https://github.com/apache/hadoop-ozone/pull/1474


   






[GitHub] [hadoop-ozone] sodonnel commented on pull request #1474: Merge Master into the HDDS-1880-Decom branch

2020-10-06 Thread GitBox


sodonnel commented on pull request #1474:
URL: https://github.com/apache/hadoop-ozone/pull/1474#issuecomment-704094468


   Merged from the CLI, so closing this PR.






[jira] [Updated] (HDDS-4310) Ozone getconf broke the compatibility

2020-10-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4310:
-
Labels: pull-request-available  (was: )

> Ozone getconf broke the compatibility
> -
>
> Key: HDDS-4310
> URL: https://issues.apache.org/jira/browse/HDDS-4310
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone CLI
>Affects Versions: 1.0.0
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
>  Labels: pull-request-available
>
> Currently, ozone getconf '-confKey' does not work, as 'HDDS-3102' removed the 
> need to prepend '-' to options.
> {code:java}
> RUNNING: /opt/cloudera/parcels/CDH/bin/ozone getconf -confKey 
> ozone.om.service.ids 2020-10-05 19:10:09,110|INFO|MainThread|machine.py:180 - 
> run()||GUID=8644ce5b-cfe9-4e6b-9b3f-55c29c950489|Unknown options: '-confKey', 
> 'ozone.om.service.ids' 2020-10-05 19:10:09,111|INFO|MainThread|machine.py:180 
> - run()||GUID=8644ce5b-cfe9-4e6b-9b3f-55c29c950489|Possible solutions: -conf
> {code}
> Some users have automated workflows built on these commands, and this 
> change broke them.






[GitHub] [hadoop-ozone] umamaheswararao opened a new pull request #1475: HDDS-4310: Ozone getconf broke the compatibility

2020-10-06 Thread GitBox


umamaheswararao opened a new pull request #1475:
URL: https://github.com/apache/hadoop-ozone/pull/1475


   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-4310
   
   ## How was this patch tested?
   
   Added test cases.
   






[GitHub] [hadoop-ozone] sodonnel commented on pull request #1474: Merge Master into the HDDS-1880-Decom branch

2020-10-06 Thread GitBox


sodonnel commented on pull request #1474:
URL: https://github.com/apache/hadoop-ozone/pull/1474#issuecomment-704081400


   All tests have passed on 3 runs so this looks good. I will merge it from the 
CLI so we don't lose the commit history, and then close this PR.






[GitHub] [hadoop-ozone] GlenGeng commented on pull request #1319: HDDS-4107: replace scmID with clusterID for container and volume at Datanode side

2020-10-06 Thread GitBox


GlenGeng commented on pull request #1319:
URL: https://github.com/apache/hadoop-ozone/pull/1319#issuecomment-704078470


   > /pending @GlenGeng What is the plan with this issue? Do you need help? How 
can I help to move it forward?
   
   Hey, @elek, sorry for the late reply!
   
   I discussed this with @nandakumar131 several weeks ago; we decided to abandon 
this solution and will figure out a new one.
   Since this currently does not block our SCM HA test, we are giving it low 
priority.
   
   I will close this PR.






[GitHub] [hadoop-ozone] GlenGeng closed pull request #1319: HDDS-4107: replace scmID with clusterID for container and volume at Datanode side

2020-10-06 Thread GitBox


GlenGeng closed pull request #1319:
URL: https://github.com/apache/hadoop-ozone/pull/1319


   


