[GitHub] [hadoop-ozone] timmylicheng merged pull request #1428: HDDS-4192: enable SCM Raft Group based on config ozone.scm.names

2020-10-09 Thread GitBox


timmylicheng merged pull request #1428:
URL: https://github.com/apache/hadoop-ozone/pull/1428


   






[GitHub] [hadoop-ozone] timmylicheng commented on pull request #1428: HDDS-4192: enable SCM Raft Group based on config ozone.scm.names

2020-10-09 Thread GitBox


timmylicheng commented on pull request #1428:
URL: https://github.com/apache/hadoop-ozone/pull/1428#issuecomment-706478866


   LGTM. +1. Merging.






[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #1473: HDDS-4266: CreateFile : store parent dir entries into DirTable and file entry into separate FileTable

2020-10-09 Thread GitBox


rakeshadr commented on a change in pull request #1473:
URL: https://github.com/apache/hadoop-ozone/pull/1473#discussion_r502734677



##
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMFileRequest.java
##
@@ -453,4 +456,145 @@ public static void addDirectoryTableCacheEntries(
     }
   }
 
+  /**
+   * Adds key info to the openFileTable cache.
+   *
+   * @param omMetadataManager OM Metadata Manager
+   * @param dbOpenFileName    open file name key
+   * @param omFileInfo        key info
+   * @param fileName          file name
+   * @param trxnLogIndex      transaction log index
+   */
+  public static void addOpenFileTableCacheEntry(
+      OMMetadataManager omMetadataManager, String dbOpenFileName,
+      @Nullable OmKeyInfo omFileInfo, String fileName, long trxnLogIndex) {
+
+    Optional<OmKeyInfo> keyInfoOptional = Optional.absent();
+    if (omFileInfo != null) {
+      // New key format for the openFileTable.
+      // For example, the user given key path is '/a/b/c/d/e/file1', then in DB
+      // keyName field stores only the leaf node name, which is 'file1'.
+      OmKeyInfo dbOmFileInfo = omFileInfo.copyObject();

Review comment:
   OK:-)








[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #1473: HDDS-4266: CreateFile : store parent dir entries into DirTable and file entry into separate FileTable

2020-10-09 Thread GitBox


rakeshadr commented on a change in pull request #1473:
URL: https://github.com/apache/hadoop-ozone/pull/1473#discussion_r502734533



##
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMFileCreateRequestV1.java
##
@@ -0,0 +1,289 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.file;
+
+import com.google.common.base.Optional;
+import org.apache.hadoop.hdds.utils.db.cache.CacheKey;
+import org.apache.hadoop.hdds.utils.db.cache.CacheValue;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.*;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
+import org.apache.hadoop.ozone.om.request.util.OmResponseUtil;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.file.OMFileCreateResponse;
+import org.apache.hadoop.ozone.om.response.file.OMFileCreateResponseV1;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.*;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.nio.file.Paths;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.stream.Collectors;
+
+import static org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+import static org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.*;
+
+/**
+ * Handles create file request layout version1.
+ */
+public class OMFileCreateRequestV1 extends OMFileCreateRequest {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(OMFileCreateRequestV1.class);
+
+  public OMFileCreateRequestV1(OMRequest omRequest) {
+    super(omRequest);
+  }
+
+  @Override
+  @SuppressWarnings("methodlength")
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+      long trxnLogIndex, OzoneManagerDoubleBufferHelper omDoubleBufferHelper) {
+
+    CreateFileRequest createFileRequest =
+        getOmRequest().getCreateFileRequest();
+    KeyArgs keyArgs = createFileRequest.getKeyArgs();
+    Map<String, String> auditMap = buildKeyArgsAuditMap(keyArgs);
+
+    String volumeName = keyArgs.getVolumeName();
+    String bucketName = keyArgs.getBucketName();
+    String keyName = keyArgs.getKeyName();
+
+    // if isRecursive is true, the file is created even if parent
+    // directories do not exist.
+    boolean isRecursive = createFileRequest.getIsRecursive();
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("File create for : " + volumeName + "/" + bucketName + "/"
+          + keyName + ":" + isRecursive);
+    }
+
+    // if isOverWrite is true, an existing file is overwritten.
+    boolean isOverWrite = createFileRequest.getIsOverwrite();
+
+    OMMetrics omMetrics = ozoneManager.getMetrics();
+    omMetrics.incNumCreateFile();
+
+    OMMetadataManager omMetadataManager = ozoneManager.getMetadataManager();
+
+    boolean acquiredLock = false;
+
+    OmVolumeArgs omVolumeArgs = null;
+    OmBucketInfo omBucketInfo = null;
+    final List<OmKeyLocationInfo> locations = new ArrayList<>();
+    List<OmDirectoryInfo> missingParentInfos;
+    int numKeysCreated = 0;
+
+    OMClientResponse omClientResponse = null;
+    OMResponse.Builder omResponse = OmResponseUtil.getOMResponseBuilder(
+        getOmRequest());
+    IOException exception = null;
+    Result result = null;
+    try {
+      keyArgs = resolveBucketLink(ozoneManager, keyArgs, auditMap);
+      volumeName = keyArgs.getVolumeName();
+      bucketName = keyArgs.getBucketName();
+
+      if (keyName.length() == 0) {
+        // Check if this is the root of the filesystem.
+        throw new OMException("Can not write to directory: " + keyName,
+            OMException.ResultCodes.NOT_A_FILE);
+      }
+
+      // check Acl
+      checkKeyAcls(ozoneManager, volumeName, bucketName, keyName,

[jira] [Created] (HDDS-4330) Bootstrap new OM node

2020-10-09 Thread Hanisha Koneru (Jira)
Hanisha Koneru created HDDS-4330:


 Summary: Bootstrap new OM node
 Key: HDDS-4330
 URL: https://issues.apache.org/jira/browse/HDDS-4330
 Project: Hadoop Distributed Data Store
  Issue Type: New Feature
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru


In a Ratis-enabled OM cluster, add support to bootstrap a new OM node and add
it to the OM Ratis ring.






[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #1473: HDDS-4266: CreateFile : store parent dir entries into DirTable and file entry into separate FileTable

2020-10-09 Thread GitBox


bharatviswa504 commented on a change in pull request #1473:
URL: https://github.com/apache/hadoop-ozone/pull/1473#discussion_r502699811



##
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMFileCreateRequestV1.java
##

[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #1473: HDDS-4266: CreateFile : store parent dir entries into DirTable and file entry into separate FileTable

2020-10-09 Thread GitBox


bharatviswa504 commented on a change in pull request #1473:
URL: https://github.com/apache/hadoop-ozone/pull/1473#discussion_r502696947



##
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMFileCreateRequestV1.java
##

[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #1473: HDDS-4266: CreateFile : store parent dir entries into DirTable and file entry into separate FileTable

2020-10-09 Thread GitBox


bharatviswa504 commented on a change in pull request #1473:
URL: https://github.com/apache/hadoop-ozone/pull/1473#discussion_r502697158



##
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
##
@@ -227,6 +247,9 @@ protected OmMetadataManagerImpl() {
 
   @Override
   public Table<String, OmKeyInfo> getOpenKeyTable() {
+    if (enableFSPaths && OzoneManagerRatisUtils.isOmLayoutVersionV1()) {

Review comment:
   Fine with me.








[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #1473: HDDS-4266: CreateFile : store parent dir entries into DirTable and file entry into separate FileTable

2020-10-09 Thread GitBox


bharatviswa504 commented on a change in pull request #1473:
URL: https://github.com/apache/hadoop-ozone/pull/1473#discussion_r502690355



##
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMFileRequest.java
##
@@ -453,4 +456,145 @@ public static void addDirectoryTableCacheEntries(
     }
   }
 
+  /**
+   * Adds key info to the openFileTable cache.
+   *
+   * @param omMetadataManager OM Metadata Manager
+   * @param dbOpenFileName    open file name key
+   * @param omFileInfo        key info
+   * @param fileName          file name
+   * @param trxnLogIndex      transaction log index
+   */
+  public static void addOpenFileTableCacheEntry(
+      OMMetadataManager omMetadataManager, String dbOpenFileName,
+      @Nullable OmKeyInfo omFileInfo, String fileName, long trxnLogIndex) {
+
+    Optional<OmKeyInfo> keyInfoOptional = Optional.absent();
+    if (omFileInfo != null) {
+      // New key format for the openFileTable.
+      // For example, the user given key path is '/a/b/c/d/e/file1', then in DB
+      // keyName field stores only the leaf node name, which is 'file1'.
+      OmKeyInfo dbOmFileInfo = omFileInfo.copyObject();

Review comment:
   Looks like for every key commit we now copy twice: once to get it from the
table, and once here.
   I understand the reason.
   
   Nothing needs to be done here; just mentioning the difference between the
original request and V1.








[GitHub] [hadoop-ozone] adoroszlai commented on pull request #1476: HDDS-4312. findbugs check succeeds despite compile error

2020-10-09 Thread GitBox


adoroszlai commented on pull request #1476:
URL: https://github.com/apache/hadoop-ozone/pull/1476#issuecomment-706409745


   Thanks @elek for reviewing and committing it, and @amaliujia for the review.






[GitHub] [hadoop-ozone] adoroszlai commented on pull request #1454: HDDS-4285. Read is slow due to frequent calls to UGI.getCurrentUser() and getTokens()

2020-10-09 Thread GitBox


adoroszlai commented on pull request #1454:
URL: https://github.com/apache/hadoop-ozone/pull/1454#issuecomment-706409317


   Thanks @xiaoyuyao for reviewing and committing it, @ChenSammi and @linyiqun 
for the review, and @elek for finding the issue.






[GitHub] [hadoop-ozone] avijayanhwx commented on pull request #1486: HDDS-4296. SCM changes to process Layout Info in heartbeat request/response

2020-10-09 Thread GitBox


avijayanhwx commented on pull request #1486:
URL: https://github.com/apache/hadoop-ozone/pull/1486#issuecomment-706377348


   cc @fapifta / @sodonnel Please review. 






[jira] [Created] (HDDS-4329) Expose Ratis retry config cache in OM

2020-10-09 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-4329:


 Summary: Expose Ratis retry config cache in OM
 Key: HDDS-4329
 URL: https://issues.apache.org/jira/browse/HDDS-4329
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


This Jira is to expose the Ratis retry cache duration config in OM, and also to
choose a sensible default value.






[jira] [Comment Edited] (HDDS-4308) Fix issue with quota update

2020-10-09 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17211269#comment-17211269
 ] 

Bharat Viswanadham edited comment on HDDS-4308 at 10/9/20, 6:47 PM:


{quote}I think the better solution here is to copy a new volumeArgs object in
the Request before addResponseToDoubleBuffer. Of course, during the copy
process, we need to lock the volumeArgs object in case other operations change
it.
{quote}
This might not be complete, I believe: if two threads acquire the copied object
and update it outside the lock, we have the same issue again. I think the whole
operation should be performed under the volume lock (as we update in-memory, it
should be quick). But I agree that it might have a performance impact across
buckets when key writes happen.

Question: With your tests, how much perf impact has been observed?

cc [~arp] for any more thoughts on this issue.


was (Author: bharatviswa):
I think the better solution here is to copy a new volumeArgs object in the
Request before addResponseToDoubleBuffer. Of course, during the copy process,
we need to lock the volumeArgs object in case other operations change it.

This might not be complete, I believe: if two threads acquire the copied object
and update it outside the lock, we have the same issue again. I think the whole
operation should be performed under the volume lock (as we update in-memory, it
should be quick). But I agree that it might have a performance impact across
buckets when key writes happen.

Question: With your tests, how much perf impact has been observed?

cc [~arp] for any more thoughts on this issue.




> Fix issue with quota update
> ---
>
> Key: HDDS-4308
> URL: https://issues.apache.org/jira/browse/HDDS-4308
> Project: Hadoop Distributed Data Store
> Issue Type: Bug
> Reporter: Bharat Viswanadham
> Priority: Blocker
>
> Currently volumeArgs uses getCacheValue and puts the same object into the
> doubleBuffer; this might cause an issue.
> Let's take the below scenario:
> Initial VolumeArgs quotaBytes -> 10000
> 1. T1 -> Updates VolumeArgs, subtracting 1000, and puts this updated
> volumeArgs into the DoubleBuffer.
> 2. T2 -> Updates VolumeArgs, subtracting 2000, and has not yet updated the
> double buffer.
> *Now at the end of flushing these transactions, our DB should have 7000 as
> bytes used.*
> Now T1 is picked up by the double buffer and, when it commits, since it uses
> the cached object put into the doubleBuffer, it flushes to the DB with the
> updated value from T2 (as it is a cache object) and updates the DB with
> bytesUsed as 7000.
> Now the OM is restarted, and the DB only has transactions up to T1. (We get
> this info from the TransactionInfo
> Table (https://issues.apache.org/jira/browse/HDDS-3685).)
> Now T2 is replayed again; as it was not committed to the DB, the DB is again
> subtracted by 2000, and the DB now has 5000.
> But after T2, the value should be 7000, so we have the DB in an incorrect state.
> Issue here:
> 1. As we use a cached object and put the same cached object into the double
> buffer, this can cause this kind of issue.
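
A minimal Java sketch of the lost-update hazard described above; the class,
field, and numbers mirror the scenario (10000 - 1000 - 2000 = 7000) but are
illustrative, not the actual OM types:

{code:java}
// Stand-in for the cached OmVolumeArgs; starts at 10000 bytes used.
class CachedVolumeArgs {
  long bytesUsed = 10000;
}

class QuotaRaceSketch {
  private final CachedVolumeArgs cached = new CachedVolumeArgs();

  // Hazardous: subtracts on the shared cached object and hands the same
  // mutable reference to the flush buffer. If T2 runs before T1 is flushed,
  // T1's flush writes 7000 (T2's update leaks in); replaying T2 after a
  // restart then subtracts 2000 again, leaving 5000 instead of 7000.
  CachedVolumeArgs updateAndEnqueueUnsafe(long delta) {
    cached.bytesUsed -= delta;
    return cached;                 // same object a later transaction mutates
  }

  // Safer shape: snapshot the value for this transaction under a lock, so
  // what is flushed reflects exactly the transactions applied so far.
  long updateAndSnapshot(long delta) {
    synchronized (this) {
      cached.bytesUsed -= delta;
      return cached.bytesUsed;     // immutable snapshot to flush
    }
  }
}
{code}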






[jira] [Commented] (HDDS-4308) Fix issue with quota update

2020-10-09 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17211269#comment-17211269
 ] 

Bharat Viswanadham commented on HDDS-4308:
--

I think the better solution here is to copy a new volumeArgs object in the
Request before addResponseToDoubleBuffer. Of course, during the copy process,
we need to lock the volumeArgs object in case other operations change it.

This might not be complete, I believe: if two threads acquire the copied object
and update it outside the lock, we have the same issue again. I think the whole
operation should be performed under the volume lock (as we update in-memory, it
should be quick). But I agree that it might have a performance impact across
buckets when key writes happen.

Question: With your tests, how much perf impact has been observed?

cc [~arp] for any more thoughts on this issue.




> Fix issue with quota update
> ---
>
> Key: HDDS-4308
> URL: https://issues.apache.org/jira/browse/HDDS-4308
> Project: Hadoop Distributed Data Store
> Issue Type: Bug
> Reporter: Bharat Viswanadham
> Priority: Blocker
>
> Currently volumeArgs uses getCacheValue and puts the same object into the
> doubleBuffer; this might cause an issue.
> Let's take the below scenario:
> Initial VolumeArgs quotaBytes -> 10000
> 1. T1 -> Updates VolumeArgs, subtracting 1000, and puts this updated
> volumeArgs into the DoubleBuffer.
> 2. T2 -> Updates VolumeArgs, subtracting 2000, and has not yet updated the
> double buffer.
> *Now at the end of flushing these transactions, our DB should have 7000 as
> bytes used.*
> Now T1 is picked up by the double buffer and, when it commits, since it uses
> the cached object put into the doubleBuffer, it flushes to the DB with the
> updated value from T2 (as it is a cache object) and updates the DB with
> bytesUsed as 7000.
> Now the OM is restarted, and the DB only has transactions up to T1. (We get
> this info from the TransactionInfo
> Table (https://issues.apache.org/jira/browse/HDDS-3685).)
> Now T2 is replayed again; as it was not committed to the DB, the DB is again
> subtracted by 2000, and the DB now has 5000.
> But after T2, the value should be 7000, so we have the DB in an incorrect state.
> Issue here:
> 1. As we use a cached object and put the same cached object into the double
> buffer, this can cause this kind of issue.






[jira] [Updated] (HDDS-4296) SCM changes to process Layout Info in heartbeat request/response

2020-10-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4296:
-
Labels: pull-request-available  (was: )

> SCM changes to process Layout Info in heartbeat request/response
> 
>
> Key: HDDS-4296
> URL: https://issues.apache.org/jira/browse/HDDS-4296
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Prashant Pogde
>Assignee: Prashant Pogde
>Priority: Major
>  Labels: pull-request-available
>







[GitHub] [hadoop-ozone] prashantpogde opened a new pull request #1486: HDDS-4296. SCM changes to process Layout Info in heartbeat request/response

2020-10-09 Thread GitBox


prashantpogde opened a new pull request #1486:
URL: https://github.com/apache/hadoop-ozone/pull/1486


   ## What changes were proposed in this pull request?
   
   SCM changes to process Layout Info in heartbeat request/response
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-4296
   
   ## How was this patch tested?
   
   UT. I will fix any CI failure.






[jira] [Comment Edited] (HDDS-4308) Fix issue with quota update

2020-10-09 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17211254#comment-17211254
 ] 

Bharat Viswanadham edited comment on HDDS-4308 at 10/9/20, 6:17 PM:


As mentioned in the scenario, this can happen when the double buffer flush has
not completed and other transaction requests update the bytes used for the same
key. This can be seen when you run with a load; previously we observed a
ConcurrentModificationException because of using the same cache object in the
double buffer (HDDS-2322).

But here we might not see an error; instead, bytesUsed can be updated wrongly.




was (Author: bharatviswa):
As mentioned in the scenario, this can happen when the double buffer flush has
not completed and other transaction requests update the bytes used for the same
key. This can be seen when you run with a load; previously we observed a
ConcurrentModificationException because of using the same cache object in the
double buffer (HDDS-2322).

But here we might not see an error; instead, bytesUsed can be updated wrongly.

Coming to the solution: I think we can use a read lock, acquire the volume
object using the Table#get API, update the bytes used, and submit this object
to the double buffer. (In this way, we might not see volume lock contention; as
we acquire a write lock on the volume during create/delete, this might not
affect performance.)

Let me know your thoughts.



> Fix issue with quota update
> ---
>
> Key: HDDS-4308
> URL: https://issues.apache.org/jira/browse/HDDS-4308
> Project: Hadoop Distributed Data Store
> Issue Type: Bug
> Reporter: Bharat Viswanadham
> Priority: Blocker
>
> Currently volumeArgs uses getCacheValue and puts the same object into the
> doubleBuffer; this might cause an issue.
> Let's take the below scenario:
> Initial VolumeArgs quotaBytes -> 10000
> 1. T1 -> Updates VolumeArgs, subtracting 1000, and puts this updated
> volumeArgs into the DoubleBuffer.
> 2. T2 -> Updates VolumeArgs, subtracting 2000, and has not yet updated the
> double buffer.
> *Now at the end of flushing these transactions, our DB should have 7000 as
> bytes used.*
> Now T1 is picked up by the double buffer and, when it commits, since it uses
> the cached object put into the doubleBuffer, it flushes to the DB with the
> updated value from T2 (as it is a cache object) and updates the DB with
> bytesUsed as 7000.
> Now the OM is restarted, and the DB only has transactions up to T1. (We get
> this info from the TransactionInfo
> Table (https://issues.apache.org/jira/browse/HDDS-3685).)
> Now T2 is replayed again; as it was not committed to the DB, the DB is again
> subtracted by 2000, and the DB now has 5000.
> But after T2, the value should be 7000, so we have the DB in an incorrect state.
> Issue here:
> 1. As we use a cached object and put the same cached object into the double
> buffer, this can cause this kind of issue.






[jira] [Commented] (HDDS-4308) Fix issue with quota update

2020-10-09 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17211254#comment-17211254
 ] 

Bharat Viswanadham commented on HDDS-4308:
--

As mentioned in the scenario, this can happen when the double buffer flush has
not completed and other transaction requests update the bytes used for the same
key. This can be seen when you run with a load; previously we observed a
ConcurrentModificationException because of using the same cache object in the
double buffer (HDDS-2322).

But here we might not see an error; instead, bytesUsed can be updated wrongly.

Coming to the solution: I think we can use a read lock, acquire the volume
object using the Table#get API, update the bytes used, and submit this object
to the double buffer. (In this way, we might not see volume lock contention; as
we acquire a write lock on the volume during create/delete, this might not
affect performance.)

Let me know your thoughts.



> Fix issue with quota update
> ---
>
> Key: HDDS-4308
> URL: https://issues.apache.org/jira/browse/HDDS-4308
> Project: Hadoop Distributed Data Store
> Issue Type: Bug
> Reporter: Bharat Viswanadham
> Priority: Blocker
>
> Currently volumeArgs uses getCacheValue and puts the same object into the
> doubleBuffer; this might cause an issue.
> Let's take the below scenario:
> Initial VolumeArgs quotaBytes -> 10000
> 1. T1 -> Updates VolumeArgs, subtracting 1000, and puts this updated
> volumeArgs into the DoubleBuffer.
> 2. T2 -> Updates VolumeArgs, subtracting 2000, and has not yet updated the
> double buffer.
> *Now at the end of flushing these transactions, our DB should have 7000 as
> bytes used.*
> Now T1 is picked up by the double buffer and, when it commits, since it uses
> the cached object put into the doubleBuffer, it flushes to the DB with the
> updated value from T2 (as it is a cache object) and updates the DB with
> bytesUsed as 7000.
> Now the OM is restarted, and the DB only has transactions up to T1. (We get
> this info from the TransactionInfo
> Table (https://issues.apache.org/jira/browse/HDDS-3685).)
> Now T2 is replayed again; as it was not committed to the DB, the DB is again
> subtracted by 2000, and the DB now has 5000.
> But after T2, the value should be 7000, so we have the DB in an incorrect state.
> Issue here:
> 1. As we use a cached object and put the same cached object into the double
> buffer, this can cause this kind of issue.






[jira] [Commented] (HDDS-4164) OM client request fails with "failed to commit as key is not found in OpenKey table"

2020-10-09 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17211251#comment-17211251
 ] 

Bharat Viswanadham commented on HDDS-4164:
--

HDDS-4262 is the root cause of this issue: when the leader changes, all pending
requests are answered by the old leader with NOT_LEADER. Previously we used a
new clientID and callID on each retry, so the Ratis server was not able to
recognize a retried request. With the fix from HDDS-4262 I ran a freon test and
no longer see KEY_NOT_FOUND.

[~ljain] I will close this bug once you confirm.

> OM client request fails with "failed to commit as key is not found in OpenKey 
> table"
> 
>
> Key: HDDS-4164
> URL: https://issues.apache.org/jira/browse/HDDS-4164
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: OM HA
>Reporter: Lokesh Jain
>Assignee: Bharat Viswanadham
>Priority: Blocker
>
> OM client request fails with "failed to commit as key is not found in OpenKey 
> table"
> {code:java}
> 20/08/28 03:21:53 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28868 $Proxy17.submitRequest over 
> nodeId=om3,nodeAddress=vc1330.halxg.cloudera.com:9862
> 20/08/28 03:21:53 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28870 $Proxy17.submitRequest over 
> nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:53 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28869 $Proxy17.submitRequest over 
> nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:54 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28871 $Proxy17.submitRequest over 
> nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:54 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28872 $Proxy17.submitRequest over 
> nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:54 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28866 $Proxy17.submitRequest over 
> nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:54 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28867 $Proxy17.submitRequest over 
> nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:54 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28874 $Proxy17.submitRequest over 
> nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:54 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28875 $Proxy17.submitRequest over 
> nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:54 ERROR freon.BaseFreonGenerator: Error on executing task 
> 14424
> KEY_NOT_FOUND org.apache.hadoop.ozone.om.exceptions.OMException: Failed to 
> commit key, as /vol1/bucket1/akjkdz4hoj/14424/104766512182520809entry is not 
> found in the OpenKey table
> at 
> org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.handleError(OzoneManagerProtocolClientSideTranslatorPB.java:593)
> at 
> org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.commitKey(OzoneManagerProtocolClientSideTranslatorPB.java:650)
> at 
> org.apache.hadoop.ozone.client.io.BlockOutputStreamEntryPool.commitKey(BlockOutputStreamEntryPool.java:306)
> at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.close(KeyOutputStream.java:514)
> at 
> org.apache.hadoop.ozone.client.io.OzoneOutputStream.close(OzoneOutputStream.java:60)
> at 
> org.apache.hadoop.ozone.freon.OzoneClientKeyGenerator.lambda$createKey$0(OzoneClientKeyGenerator.java:118)
> at com.codahale.metrics.Timer.time(Timer.java:101)
> at 
> org.apache.hadoop.ozone.freon.OzoneClientKeyGenerator.createKey(OzoneClientKeyGenerator.java:113)
> at 
> org.apache.hadoop.ozone.freon.BaseFreonGenerator.tryNextTask(BaseFreonGenerator.java:178)
> at 
> org.apache.hadoop.ozone.freon.BaseFreonGenerator.taskLoop(BaseFreonGenerator.java:167)
> at 
> org.apache.hadoop.ozone.freon.BaseFreonGenerator.lambda$startTaskRunners$0(BaseFreonGenerator.java:150)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {code}




[jira] [Commented] (HDDS-3728) Bucket space: check quotaUsageInBytes when write key

2020-10-09 Thread Rui Wang (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17211239#comment-17211239
 ] 

Rui Wang commented on HDDS-3728:


[~micahzhao] thank you! I will send the other space-quota-related PRs soon,
after rebasing against this one.

> Bucket space: check quotaUsageInBytes when write key
> 
>
> Key: HDDS-3728
> URL: https://issues.apache.org/jira/browse/HDDS-3728
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Simon Su
>Assignee: mingchao zhao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>







[jira] [Issue Comment Deleted] (HDDS-3728) Bucket space: check quotaUsageInBytes when write key

2020-10-09 Thread Rui Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Wang updated HDDS-3728:
---
Comment: was deleted

(was: [~micahzhao] thank you! I will send the other space-quota-related PRs
soon, after rebasing against this one.)

> Bucket space: check quotaUsageInBytes when write key
> 
>
> Key: HDDS-3728
> URL: https://issues.apache.org/jira/browse/HDDS-3728
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Simon Su
>Assignee: mingchao zhao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>







[jira] [Updated] (HDDS-4262) Use ClientID and CallID from Rpc Client to detect retry requests

2020-10-09 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-4262:
-
Fix Version/s: 1.1.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Use ClientID and CallID from Rpc Client to detect retry requests
> 
>
> Key: HDDS-4262
> URL: https://issues.apache.org/jira/browse/HDDS-4262
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: OM HA
>Affects Versions: 1.0.0
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> Use clientID and callID to uniquely identify the requests.
> This will help when a write request is retried: if the previous attempt was
> already processed, the previous result can be returned from the cache.
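
A minimal sketch of a retry cache keyed by clientID and callID; the names and
structure are illustrative, not the actual Ratis retry cache:

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Illustrative: a retried write carries the same clientId and callId as the
// original request, so the server can return the cached reply instead of
// applying the request a second time.
class RetryCacheSketch<R> {
  private final Map<String, R> replies = new ConcurrentHashMap<>();

  R handle(String clientId, long callId, Supplier<R> apply) {
    String key = clientId + ":" + callId;   // stable across client retries
    return replies.computeIfAbsent(key, k -> apply.get());
  }
}
{code}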






[GitHub] [hadoop-ozone] bharatviswa504 commented on pull request #1436: HDDS-4262. Use ClientID and CallID from Rpc Client to detect retry requests

2020-10-09 Thread GitBox


bharatviswa504 commented on pull request #1436:
URL: https://github.com/apache/hadoop-ozone/pull/1436#issuecomment-706312696


   Thank You @hanishakoneru for the review.






[GitHub] [hadoop-ozone] bharatviswa504 merged pull request #1436: HDDS-4262. Use ClientID and CallID from Rpc Client to detect retry requests

2020-10-09 Thread GitBox


bharatviswa504 merged pull request #1436:
URL: https://github.com/apache/hadoop-ozone/pull/1436


   






[GitHub] [hadoop-ozone] hanishakoneru commented on pull request #1436: HDDS-4262. Use ClientID and CallID from Rpc Client to detect retry requests

2020-10-09 Thread GitBox


hanishakoneru commented on pull request #1436:
URL: https://github.com/apache/hadoop-ozone/pull/1436#issuecomment-706304462


   Thanks Bharat. LGTM. +1.






[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #1473: HDDS-4266: CreateFile : store parent dir entries into DirTable and file entry into separate FileTable

2020-10-09 Thread GitBox


rakeshadr commented on a change in pull request #1473:
URL: https://github.com/apache/hadoop-ozone/pull/1473#discussion_r502565993



##
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
##
@@ -227,6 +247,9 @@ protected OmMetadataManagerImpl() {
 
   @Override
   public Table<String, OmKeyInfo> getOpenKeyTable() {
+    if (enableFSPaths && OzoneManagerRatisUtils.isOmLayoutVersionV1()) {

Review comment:
   Sure, I will add special handling in the KeyCommit code while implementing
the KeyCreate request. Hope that makes sense to you.








[GitHub] [hadoop-ozone] rakeshadr commented on pull request #1473: HDDS-4266: CreateFile : store parent dir entries into DirTable and file entry into separate FileTable

2020-10-09 Thread GitBox


rakeshadr commented on pull request #1473:
URL: https://github.com/apache/hadoop-ozone/pull/1473#issuecomment-706282123


   Thank you @linyiqun and @bharatviswa504 for the continuous help with
reviews. Please let me know if there are any more comments.






[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #1473: HDDS-4266: CreateFile : store parent dir entries into DirTable and file entry into separate FileTable

2020-10-09 Thread GitBox


bharatviswa504 commented on a change in pull request #1473:
URL: https://github.com/apache/hadoop-ozone/pull/1473#discussion_r502546088



##
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
##
@@ -227,6 +247,9 @@ protected OmMetadataManagerImpl() {
 
   @Override
   public Table<String, OmKeyInfo> getOpenKeyTable() {
+    if (enableFSPaths && OzoneManagerRatisUtils.isOmLayoutVersionV1()) {

Review comment:
   Just to add one more point here:
   this is an FS API (file create), not an Object Store API, so the flag has no
effect here.
   
   But KeyCommitRequest is common to both FS and Object Store, so we need
special handling over there.











[GitHub] [hadoop-ozone] hanishakoneru commented on pull request #1480: HDDS-4315. Use Epoch to generate unique ObjectIDs

2020-10-09 Thread GitBox


hanishakoneru commented on pull request #1480:
URL: https://github.com/apache/hadoop-ozone/pull/1480#issuecomment-706275207


   Thank you @linyiqun and @prashantpogde for the reviews. 
   
   Agreed that setting aside 16 bits for the epoch doesn't work for both the
epoch and the transaction ids: 16 bits would not be enough to cover restarts,
and 40 bits might not be enough for transaction ids.
   The new proposal is to set aside only 2 bits for the epoch. For a non-Ratis
OM, the transactionIndex will be saved in the DB with every sync operation.
When the OM is restarted, this transactionIndex will be read from the DB so
that new transactions do not have clashing indices.
   The epoch would let us distinguish objects created before and after this
upgrade. This would help if someone needs to fix duplicate objectIDs in
existing clusters.
   
   Thank you @bharatviswa504 and @prashantpogde for the offline discussion.
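
A minimal sketch of the bit layout this implies, assuming the 2 epoch bits sit
just below the sign bit and the rest holds the transaction index (the exact
widths are an assumption for the sketch, not the final implementation):

{code:java}
// Illustrative ObjectID packing: 2 epoch bits above a 61-bit transaction
// index, leaving the sign bit clear.
final class ObjectIdSketch {
  private static final int TX_BITS = 61;
  private static final long TX_MASK = (1L << TX_BITS) - 1;

  static long objectId(long epoch, long txIndex) {
    return (epoch << TX_BITS) | (txIndex & TX_MASK);
  }

  static long epochOf(long objectId) {
    return objectId >>> TX_BITS;   // 0..3
  }

  static long txIndexOf(long objectId) {
    return objectId & TX_MASK;
  }
}
{code}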






[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #1473: HDDS-4266: CreateFile : store parent dir entries into DirTable and file entry into separate FileTable

2020-10-09 Thread GitBox


rakeshadr commented on a change in pull request #1473:
URL: https://github.com/apache/hadoop-ozone/pull/1473#discussion_r502525148



##
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
##
@@ -227,6 +247,9 @@ protected OmMetadataManagerImpl() {
 
   @Override
   public Table<String, OmKeyInfo> getOpenKeyTable() {
+    if (enableFSPaths && OzoneManagerRatisUtils.isOmLayoutVersionV1()) {

Review comment:
   Thanks @linyiqun for the comment. It seems the above comment was not 
clear. I had done few corrections to the above comment. 
   
   Yes, V1 represents new key format. 
   
   Adding background about  `ozone.om.enable.filesystem.paths` -> this is the 
config to enable/disable enableFSPaths feature. Basically here the idea is to 
provide s3/fs inter-op. Please refer jira 
https://issues.apache.org/jira/browse/HDDS-4097 for more details. If the flag 
is enabled, then the user given key will be normalized and stored in FS 
semantics format by OM and it will be 100% FS semantics. If it is false, the 
key won't be normalized and it will be 100% S3 semantics.
   
   For example, user created a key "/dir1/dir2/dir3/file1" from S3 API. Now,  
if the flag is enabled the key will be normalized and create intermediate 
directories for the file1.
   
   _**More Details:-**_
   The cases I mentioned above - **V1 feature version & enableFSPaths=true** is 
100% FS semantics and **V1 feature version & enableFSPaths=false** is 100% S3 
semantics
   
   Assume the key is /dir1/dir2/dir3/file1, the V1 feature version is enabled, and the bucketId is 512.
   Now,
   **enableFSPaths=true**, which is 100% FS semantics:
   It stores "512/dir1:1025", "1025/dir2:1026" and "1026/dir3:1027" into dirTable and "1027/file1" into openFileTable, and on close moves it to fileTable.
   
   **enableFSPaths=false**, which is 100% S3 semantics:
   It stores "512/dir1/dir2/dir3/file1:1025" into openFileTable, and on close moves it to fileTable. This still maintains the parentID/key format, but the key is the full path rather than a normalized path. Here the key can be anything, like `/dir1dir2dir3///file1`.
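   To make the two layouts concrete, a small sketch (the `dbKey` helper and the plain-string encoding are illustrative only, not the real OM codec):

   ```java
   final class KeyLayoutSketch {
     // parentID/name form used by dirTable/openFileTable in the examples above.
     static String dbKey(long parentId, String name) {
       return parentId + "/" + name;
     }

     public static void main(String[] args) {
       // enableFSPaths=true: one dirTable entry per path element,
       // the leaf lands in openFileTable.
       System.out.println(dbKey(512, "dir1"));    // "512/dir1"  -> objectID 1025
       System.out.println(dbKey(1025, "dir2"));   // "1025/dir2" -> objectID 1026
       System.out.println(dbKey(1026, "dir3"));   // "1026/dir3" -> objectID 1027
       System.out.println(dbKey(1027, "file1"));  // openFileTable entry

       // enableFSPaths=false: the whole un-normalized key hangs off the bucket.
       System.out.println(dbKey(512, "dir1/dir2/dir3/file1"));
     }
   }
   ```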
   
   Please let me know if any more details are needed.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] xiaoyuyao commented on pull request #1468: HDDS-4301. SCM CA certificate does not encode KeyUsage extension properly

2020-10-09 Thread GitBox


xiaoyuyao commented on pull request #1468:
URL: https://github.com/apache/hadoop-ozone/pull/1468#issuecomment-706258012


   bq. Is it a backward compatible change? If I understood well both the old 
method and new method is good enough for validation, but the new version is 
more standard.
   
   Yes. The new version is the standard way to encode the keyUsage. 
   
   bq. There is a new DEROctetString(keyUsage) call in SelfSignedCertificate 
which is very similar to the one which is replaced in CertificateSigningRequest.
There are 4 "new DEROctetString" calls, but only two of them are for keyUsage. This PR fixes the encoding of the keyUsage. I don't see an issue with using DEROctetString for the constraints and subject alternative names extensions. 
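   For reference, a hedged sketch of the standard BouncyCastle pattern for the keyUsage extension (assumes an `X509v3CertificateBuilder` named `certBuilder` created elsewhere; a fragment for illustration, not the exact patch):

   ```java
   import org.bouncycastle.asn1.x509.Extension;
   import org.bouncycastle.asn1.x509.KeyUsage;

   KeyUsage keyUsage = new KeyUsage(KeyUsage.keyCertSign | KeyUsage.cRLSign);
   // Pass the KeyUsage ASN.1 object directly and let the builder produce the
   // OCTET STRING wrapper, instead of hand-wrapping it in a DEROctetString.
   certBuilder.addExtension(Extension.keyUsage, true, keyUsage);
   ```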
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #1473: HDDS-4266: CreateFile : store parent dir entries into DirTable and file entry into separate FileTable

2020-10-09 Thread GitBox


rakeshadr commented on a change in pull request #1473:
URL: https://github.com/apache/hadoop-ozone/pull/1473#discussion_r502350594



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
##
@@ -227,6 +247,9 @@ protected OmMetadataManagerImpl() {
 
   @Override
   public Table<String, OmKeyInfo> getOpenKeyTable() {
+if (enableFSPaths && OzoneManagerRatisUtils.isOmLayoutVersionV1()) {

Review comment:
   Good catch @bharatviswa504. Please feel free to add if anything else is needed. Thanks again!
   
   Based on our offline discussions, below is the expected behavior for diff 
requests:
   
   **V1 feature version** : Following ops shouldn't depend on enableFSPaths flag
   1) FileCreate  -> Look into dirTable for parents. Then create entries in 
openFileTable and on close add it to fileTable.
   2) DirCreate  -> Create entries in dirTable
   3) File/DirDelete -> Look into fileTable and dirTable for the keys.
   4) File/DirRename-> Look into fileTable and dirTable for the keys.
   
   **V1 feature version & enableFSPaths=true**
   1) KeyCreate ---> Look into dirTable for parents. Create entries in 
openFileTable and on close add it to fileTable.
   2) KeyDelete ---> Look into fileTable and dirTable for the keys.
   3) KeyRename -> supported only in ozone shell. It should look into fileTable 
and dirTable for the keys.
   
   **V1 feature version & enableFSPaths=false**
   1) KeyCreate ---> Create entries in openFileTable and on close add it to 
fileTable, but the parentId is the bucketId and the key "dir1/dir2/dir3/file1" 
will be stored into fileTable like "512/dir1/dir2/dir3/file1". Assume bucketId 
is 512.
   2) KeyDelete ---> Look into fileTable for the keys.
   3) KeyRename -> supported only in ozone shell. It should look into fileTable 
for the keys.
   
   In this PR, I will handle only the `FileCreate` request; checks for enableFSPaths in KeyCommit are not provided yet. I will make these changes in the latest commit.
   
   Later, I will raise subsequent jiras for handling KeyCreate/KeyCommit and 
other ops.
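   As a rough illustration of the split above, a sketch of how a request factory might choose between the V1 and legacy handlers (the factory method is hypothetical; only the two request classes are from this PR):

   ```java
   // Hypothetical dispatch; assumes the layout version is known at this point.
   static OMFileCreateRequest fileCreateRequestFor(OMRequest req,
       boolean layoutVersionV1) {
     // V1 stores parent dirs in dirTable and the file in openFileTable/fileTable;
     // the legacy handler keeps the full key path in a single table.
     return layoutVersionV1
         ? new OMFileCreateRequestV1(req)
         : new OMFileCreateRequest(req);
   }
   ```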





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4285) Read is slow due to frequent calls to UGI.getCurrentUser() and getTokens()

2020-10-09 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-4285:
-
Fix Version/s: 1.1.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Read is slow due to frequent calls to UGI.getCurrentUser() and getTokens()
> --
>
> Key: HDDS-4285
> URL: https://issues.apache.org/jira/browse/HDDS-4285
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
> Attachments: image-2020-09-28-16-19-17-581.png, 
> profile-20200928-161631-180518.svg
>
>
> Ozone read operation turned out to be slow mainly because we do a new 
> UGI.getCurrentUser for block token for each of the calls.
> We need to cache the block token / UGI.getCurrentUserCall() to make it faster.
>  !image-2020-09-28-16-19-17-581.png! 
> To reproduce:
> Checkout: https://github.com/elek/hadoop-ozone/tree/mocked-read
> {code}
> cd hadoop-ozone/client
> export 
> MAVEN_OPTS=-agentpath:/home/elek/prog/async-profiler/build/libasyncProfiler.so=start,file=/tmp/profile-%t-%p.svg
> mvn compile exec:java 
> -Dexec.mainClass=org.apache.hadoop.ozone.client.io.TestKeyOutputStreamUnit 
> -Dexec.classpathScope=test
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-4285) Read is slow due to frequent calls to UGI.getCurrentUser() and getTokens()

2020-10-09 Thread Xiaoyu Yao (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17211083#comment-17211083
 ] 

Xiaoyu Yao commented on HDDS-4285:
--

Thanks [~adoroszlai] for the contribution and all for the reviews. The PR has 
been merged.

> Read is slow due to frequent calls to UGI.getCurrentUser() and getTokens()
> --
>
> Key: HDDS-4285
> URL: https://issues.apache.org/jira/browse/HDDS-4285
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
> Attachments: image-2020-09-28-16-19-17-581.png, 
> profile-20200928-161631-180518.svg
>
>
> Ozone read operation turned out to be slow mainly because we do a new 
> UGI.getCurrentUser for block token for each of the calls.
> We need to cache the block token / UGI.getCurrentUserCall() to make it faster.
>  !image-2020-09-28-16-19-17-581.png! 
> To reproduce:
> Checkout: https://github.com/elek/hadoop-ozone/tree/mocked-read
> {code}
> cd hadoop-ozone/client
> export 
> MAVEN_OPTS=-agentpath:/home/elek/prog/async-profiler/build/libasyncProfiler.so=start,file=/tmp/profile-%t-%p.svg
> mvn compile exec:java 
> -Dexec.mainClass=org.apache.hadoop.ozone.client.io.TestKeyOutputStreamUnit 
> -Dexec.classpathScope=test
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] xiaoyuyao commented on pull request #1454: HDDS-4285. Read is slow due to frequent calls to UGI.getCurrentUser() and getTokens()

2020-10-09 Thread GitBox


xiaoyuyao commented on pull request #1454:
URL: https://github.com/apache/hadoop-ozone/pull/1454#issuecomment-706248107


   Thanks @adoroszlai for the update. LGTM, +1.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] xiaoyuyao merged pull request #1454: HDDS-4285. Read is slow due to frequent calls to UGI.getCurrentUser() and getTokens()

2020-10-09 Thread GitBox


xiaoyuyao merged pull request #1454:
URL: https://github.com/apache/hadoop-ozone/pull/1454


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] linyiqun commented on a change in pull request #1473: HDDS-4266: CreateFile : store parent dir entries into DirTable and file entry into separate FileTable

2020-10-09 Thread GitBox


linyiqun commented on a change in pull request #1473:
URL: https://github.com/apache/hadoop-ozone/pull/1473#discussion_r502393095



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
##
@@ -227,6 +247,9 @@ protected OmMetadataManagerImpl() {
 
   @Override
   public Table<String, OmKeyInfo> getOpenKeyTable() {
+if (enableFSPaths && OzoneManagerRatisUtils.isOmLayoutVersionV1()) {

Review comment:
   @rakeshadr, the V1 feature is a new key format and is not compatible with the old format. Doesn't the enableFSPaths flag behave as a switch for this feature here? If not, what is enableFSPaths used for? Maybe I am missing something, :).





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] sodonnel commented on a change in pull request #1484: HDDS-4322. Add integration tests for Decommission and resolve issues detected by the tests.

2020-10-09 Thread GitBox


sodonnel commented on a change in pull request #1484:
URL: https://github.com/apache/hadoop-ozone/pull/1484#discussion_r502444281



##
File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DatanodeAdminMonitorImpl.java
##
@@ -294,6 +294,9 @@ private boolean 
checkContainersReplicatedOnNode(DatanodeDetails dn)
 "in containerManager", cid, dn);
   }
 }
+LOG.info("{} has {} sufficientlyReplicated, {} underReplicated and {} " +

Review comment:
   That message would be printed for each decommissioned or maintenance 
node for each run of the replication monitor. The default now is 30 seconds, 
but that is probably too short, and we will need to change that default to 
something longer, maybe a minute or two.
   
   I think it is useful to have this information in the logs from a supportability perspective. It lets us see the progress of decommission, whether a node appears to be stuck, etc., and a few messages per minute, emitted only while nodes are decommissioning, is not excessively noisy. 

##
File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/protocol/DatanodeDetails.java
##
@@ -476,6 +476,9 @@ public Builder setDatanodeDetails(DatanodeDetails details) {
   this.setupTime = details.getSetupTime();
   this.revision = details.getRevision();
   this.buildDate = details.getBuildDate();
+  this.persistedOpState = details.getPersistedOpState();
+  this.persistedOpStateExpiryEpochSec =
+  details.getPersistedOpStateExpiryEpochSec();
   return this;

Review comment:
   These fields had been missed from the DatanodeDetails builder object, so 
when the DN reported back its "persisted state" the DN was always IN_SERVICE. 
Adding this change fixed that problem and allowed the state to be returned 
correctly.

##
File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NewNodeHandler.java
##
@@ -20,27 +20,49 @@
 
 import org.apache.hadoop.hdds.conf.ConfigurationSource;
 import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.node.states.NodeNotFoundException;
 import org.apache.hadoop.hdds.scm.pipeline.PipelineManager;
 import org.apache.hadoop.hdds.server.events.EventHandler;
 import org.apache.hadoop.hdds.server.events.EventPublisher;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  * Handles New Node event.
  */
 public class NewNodeHandler implements EventHandler<DatanodeDetails> {
 
   private final PipelineManager pipelineManager;
+  private final NodeDecommissionManager decommissionManager;
   private final ConfigurationSource conf;
+  private static final Logger LOG =
+  LoggerFactory.getLogger(NewNodeHandler.class);
 
   public NewNodeHandler(PipelineManager pipelineManager,
+  NodeDecommissionManager decommissionManager,
   ConfigurationSource conf) {
 this.pipelineManager = pipelineManager;
+this.decommissionManager = decommissionManager;
 this.conf = conf;
   }
 
   @Override
   public void onMessage(DatanodeDetails datanodeDetails,
   EventPublisher publisher) {
 pipelineManager.triggerPipelineCreation();
+HddsProtos.NodeOperationalState opState
+= datanodeDetails.getPersistedOpState();
+if (datanodeDetails.getPersistedOpState()
+!= HddsProtos.NodeOperationalState.IN_SERVICE) {
+  try {
+decommissionManager.continueAdminForNode(datanodeDetails);
+  } catch (NodeNotFoundException e) {
+// Should not happen, as the node has just registered to call this 
event
+// handler.
+LOG.warn("NodeNotFound when adding the node to the 
decommissionManager",
+e);
+  }
+}
   }

Review comment:
   If a DN is DECOMMISSIONING, and then SCM is restarted, SCM will forget 
all about the decommission nodes. Then the nodes will re-register with SCM and 
report they are DECOMMISSIONING. If the node is DECOMMISSIONING rather than 
DECOMMISSIONED, we need to get it back into the decommission workflow. This 
NewNodeHandler is invoked for a new registration of a Node, so this change 
continues the decom process.

##
File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeDecommissionManager.java
##
@@ -233,6 +233,22 @@ public synchronized void decommissionNodes(List nodes)
 }
   }
 
+  /**
+   * If a SCM is restarted, then upon re-registration the datanode will already
+   * be in DECOMMISSIONING or ENTERING_MAINTENANCE state. In that case, it
+   * needs to be added back into the monitor to track its progress.
+   * @param dn Datanode to add back to tracking.
+   * @throws NodeNotFoundException
+   */
+  public synchronized void continueAdminForNode(DatanodeDetails dn)
+  throws NodeNotFoundException {
+NodeOperationalState opState = getNodeStatus(dn).getOperationalState();
+if (opState == 

[GitHub] [hadoop-ozone] sodonnel commented on pull request #1484: HDDS-4322. Add integration tests for Decommission and resolve issues detected by the tests.

2020-10-09 Thread GitBox


sodonnel commented on pull request #1484:
URL: https://github.com/apache/hadoop-ozone/pull/1484#issuecomment-706203898


   Thanks for taking a look @elek 
   
   > Can you please share more details. It would help me to understand the 
changes.
   
   I added some comments inline with each change that explain the reasoning behind them. There were three main problems:
   
   1) The state reported by the DN was not reflected in DatanodeDetails seen in 
SCM. This was due to missing fields in the builder object and resulted in 
decommission not being able to survive a SCM restart.
   
   2) After an SCM restart when a node reported as DECOMMISSIONING or 
ENTERING_MAINTENANCE, it failed to make any further progress and was stuck in 
that state. It needed to be put back into the DatanodeAdminMonitor to finish 
out the process.
   
   3) There was a small race condition in the node status being set and then updated on registration; it may not have caused any problems, but it was better to fix.
   
   Each of the problems was discovered via one of the new tests and then 
corrected to get the test to pass.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] elek merged pull request #1476: HDDS-4312. findbugs check succeeds despite compile error

2020-10-09 Thread GitBox


elek merged pull request #1476:
URL: https://github.com/apache/hadoop-ozone/pull/1476


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] adoroszlai merged pull request #1481: HDDS-4316. Upgrade to angular 1.8.0 due to CVE-2020-7676

2020-10-09 Thread GitBox


adoroszlai merged pull request #1481:
URL: https://github.com/apache/hadoop-ozone/pull/1481


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] adoroszlai commented on pull request #1483: HDDS-4319. Compile error with Java 11

2020-10-09 Thread GitBox


adoroszlai commented on pull request #1483:
URL: https://github.com/apache/hadoop-ozone/pull/1483#issuecomment-705355508


   Thanks @avijayanhwx for reviewing and committing it.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #1473: HDDS-4266: CreateFile : store parent dir entries into DirTable and file entry into separate FileTable

2020-10-09 Thread GitBox


rakeshadr commented on a change in pull request #1473:
URL: https://github.com/apache/hadoop-ozone/pull/1473#discussion_r501645890



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMFileCreateRequestV1.java
##
@@ -0,0 +1,289 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.file;
+
+import com.google.common.base.Optional;
+import org.apache.hadoop.hdds.utils.db.cache.CacheKey;
+import org.apache.hadoop.hdds.utils.db.cache.CacheValue;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.*;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
+import org.apache.hadoop.ozone.om.request.util.OmResponseUtil;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.file.OMFileCreateResponse;
+import org.apache.hadoop.ozone.om.response.file.OMFileCreateResponseV1;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.*;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.nio.file.Paths;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.stream.Collectors;
+
+import static 
org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+import static 
org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.*;
+
+/**
+ * Handles create file request layout version1.
+ */
+public class OMFileCreateRequestV1 extends OMFileCreateRequest {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMFileCreateRequestV1.class);
+  public OMFileCreateRequestV1(OMRequest omRequest) {
+super(omRequest);
+  }
+
+  @Override
+  @SuppressWarnings("methodlength")
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+  long trxnLogIndex, OzoneManagerDoubleBufferHelper omDoubleBufferHelper) {
+
+CreateFileRequest createFileRequest = 
getOmRequest().getCreateFileRequest();
+KeyArgs keyArgs = createFileRequest.getKeyArgs();
+Map auditMap = buildKeyArgsAuditMap(keyArgs);
+
+String volumeName = keyArgs.getVolumeName();
+String bucketName = keyArgs.getBucketName();
+String keyName = keyArgs.getKeyName();
+
+// if isRecursive is true, file would be created even if parent
+// directories does not exist.
+boolean isRecursive = createFileRequest.getIsRecursive();
+if (LOG.isDebugEnabled()) {
+  LOG.debug("File create for : " + volumeName + "/" + bucketName + "/"
+  + keyName + ":" + isRecursive);
+}
+
+// if isOverWrite is true, file would be over written.
+boolean isOverWrite = createFileRequest.getIsOverwrite();
+
+OMMetrics omMetrics = ozoneManager.getMetrics();
+omMetrics.incNumCreateFile();
+
+OMMetadataManager omMetadataManager = ozoneManager.getMetadataManager();
+
+boolean acquiredLock = false;
+
+OmVolumeArgs omVolumeArgs = null;
+OmBucketInfo omBucketInfo = null;
+final List<OmKeyLocationInfo> locations = new ArrayList<>();
+List<OmDirectoryInfo> missingParentInfos;
+int numKeysCreated = 0;
+
+OMClientResponse omClientResponse = null;
+OMResponse.Builder omResponse = OmResponseUtil.getOMResponseBuilder(
+getOmRequest());
+IOException exception = null;
+Result result = null;
+try {
+  keyArgs = resolveBucketLink(ozoneManager, keyArgs, auditMap);
+  volumeName = keyArgs.getVolumeName();
+  bucketName = keyArgs.getBucketName();
+
+  if (keyName.length() == 0) {
+// Check if this is the root of the filesystem.
+throw new OMException("Can not write to directory: " + keyName,
+OMException.ResultCodes.NOT_A_FILE);
+  }
+
+  // check Acl
+  checkKeyAcls(ozoneManager, volumeName, bucketName, keyName,
+  

[GitHub] [hadoop-ozone] linyiqun commented on a change in pull request #1473: HDDS-4266: CreateFile : store parent dir entries into DirTable and file entry into separate FileTable

2020-10-09 Thread GitBox


linyiqun commented on a change in pull request #1473:
URL: https://github.com/apache/hadoop-ozone/pull/1473#discussion_r501717309



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyCommitRequestV1.java
##
@@ -0,0 +1,275 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.key;
+
+import com.google.common.base.Optional;
+import org.apache.hadoop.hdds.utils.db.cache.CacheKey;
+import org.apache.hadoop.hdds.utils.db.cache.CacheValue;
+import org.apache.hadoop.ozone.audit.AuditLogger;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmDirectoryInfo;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfo;
+import org.apache.hadoop.ozone.om.helpers.OzoneFSUtils;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
+import org.apache.hadoop.ozone.om.request.util.OmResponseUtil;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.key.OMKeyCommitResponseV1;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.CommitKeyRequest;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.KeyArgs;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.KeyLocation;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMRequest;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMResponse;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+
+import static 
org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes.KEY_NOT_FOUND;
+import static 
org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes.NOT_A_FILE;
+import static 
org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+
+/**
+ * Handles CommitKey request layout version V1.
+ */
+public class OMKeyCommitRequestV1 extends OMKeyCommitRequest {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMKeyCommitRequestV1.class);
+
+  public OMKeyCommitRequestV1(OMRequest omRequest) {
+super(omRequest);
+  }
+
+  @Override
+  @SuppressWarnings("methodlength")
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+  long trxnLogIndex, OzoneManagerDoubleBufferHelper omDoubleBufferHelper) {
+
+CommitKeyRequest commitKeyRequest = getOmRequest().getCommitKeyRequest();
+
+KeyArgs commitKeyArgs = commitKeyRequest.getKeyArgs();
+
+String volumeName = commitKeyArgs.getVolumeName();
+String bucketName = commitKeyArgs.getBucketName();
+String keyName = commitKeyArgs.getKeyName();
+
+OMMetrics omMetrics = ozoneManager.getMetrics();
+omMetrics.incNumKeyCommits();
+
+AuditLogger auditLogger = ozoneManager.getAuditLogger();
+
+Map auditMap = buildKeyArgsAuditMap(commitKeyArgs);
+
+OMResponse.Builder omResponse = OmResponseUtil.getOMResponseBuilder(
+getOmRequest());
+
+IOException exception = null;
+OmKeyInfo omKeyInfo = null;
+OmVolumeArgs omVolumeArgs = null;
+OmBucketInfo omBucketInfo = null;
+OMClientResponse omClientResponse = null;
+boolean bucketLockAcquired = false;
+Result result;
+
+OMMetadataManager omMetadataManager = ozoneManager.getMetadataManager();
+
+try {
+  commitKeyArgs = resolveBucketLink(ozoneManager, commitKeyArgs, auditMap);
+  volumeName = commitKeyArgs.getVolumeName();
+  bucketName = commitKeyArgs.getBucketName();
+
+  // check Acl
+  

[GitHub] [hadoop-ozone] errose28 commented on a change in pull request #1435: HDDS-4122. Implement OM Delete Expired Open Key Request and Response

2020-10-09 Thread GitBox


errose28 commented on a change in pull request #1435:
URL: https://github.com/apache/hadoop-ozone/pull/1435#discussion_r501976342



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/AbstractOMKeyDeleteResponse.java
##
@@ -0,0 +1,143 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.response.key;
+
+import org.apache.hadoop.hdds.utils.db.Table;
+import org.apache.hadoop.ozone.OmUtils;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfoGroup;
+import org.apache.hadoop.ozone.om.helpers.RepeatedOmKeyInfo;
+import org.apache.hadoop.ozone.om.response.CleanupTableInfo;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.hdds.utils.db.BatchOperation;
+
+import java.io.IOException;
+import javax.annotation.Nullable;
+import javax.annotation.Nonnull;
+
+import static org.apache.hadoop.ozone.om.OmMetadataManagerImpl.DELETED_TABLE;
+
+/**
+ * Base class for responses that need to move keys from an arbitrary table to
+ * the deleted table.
+ */
+@CleanupTableInfo(cleanupTables = {DELETED_TABLE})
+public abstract class AbstractOMKeyDeleteResponse extends OMClientResponse {
+
+  private boolean isRatisEnabled;
+
+  public AbstractOMKeyDeleteResponse(
+  @Nonnull OMResponse omResponse, boolean isRatisEnabled) {
+
+super(omResponse);
+this.isRatisEnabled = isRatisEnabled;
+  }
+
+  /**
+   * For when the request is not successful.
+   * For a successful request, the other constructor should be used.
+   */
+  public AbstractOMKeyDeleteResponse(@Nonnull OMResponse omResponse) {
+super(omResponse);
+checkStatusNotOK();
+  }
+
+  /**
+   * Adds the operation of deleting the {@code keyName omKeyInfo} pair from
+   * {@code fromTable} to the batch operation {@code batchOperation}. The
+   * batch operation is not committed, so no changes are persisted to disk.
+   * The log transaction index used will be retrieved by calling
+   * {@link OmKeyInfo#getUpdateID} on {@code omKeyInfo}.
+   */
+  protected void addDeletionToBatch(
+  OMMetadataManager omMetadataManager,
+  BatchOperation batchOperation,
+  Table fromTable,
+  String keyName,
+  OmKeyInfo omKeyInfo) throws IOException {
+
+addDeletionToBatch(omMetadataManager, batchOperation, fromTable, keyName,
+omKeyInfo, omKeyInfo.getUpdateID());
+  }
+
+  /**
+   * Adds the operation of deleting the {@code keyName omKeyInfo} pair from
+   * {@code fromTable} to the batch operation {@code batchOperation}. The
+   * batch operation is not committed, so no changes are persisted to disk.
+   */
+  protected void addDeletionToBatch(

Review comment:
   This is actually related to a mistake I made in OMKeysDeleteResponse. The original implementation had one trxnLogIndex that it used for all the keys. All other calls to this method use the updateID of the provided keyInfo as the trxnLogIndex. If the way I am doing it currently is acceptable (OMKeysDeleteResponse uses the updateID of each key as its trxnLogIndex instead of one value for all deleted keys), then I can remove the overload. If not, I can fix OMKeysDeleteResponse to call the overload, giving it behavior identical to its original implementation.

##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/AbstractOMKeyDeleteResponse.java
##
@@ -0,0 +1,143 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * 

[GitHub] [hadoop-ozone] ChenSammi merged pull request #1458: HDDS-3728. Bucket space: check quotaUsageInBytes when write key and allocate block.

2020-10-09 Thread GitBox


ChenSammi merged pull request #1458:
URL: https://github.com/apache/hadoop-ozone/pull/1458


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] maobaolong commented on a change in pull request #1083: HDDS-3814. Drop a column family through debug cli tool

2020-10-09 Thread GitBox


maobaolong commented on a change in pull request #1083:
URL: https://github.com/apache/hadoop-ozone/pull/1083#discussion_r502165402



##
File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/debug/DropTable.java
##
@@ -0,0 +1,74 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.debug;
+
+import org.apache.hadoop.hdds.cli.SubcommandWithParent;
+import org.rocksdb.ColumnFamilyDescriptor;
+import org.rocksdb.ColumnFamilyHandle;
+import org.rocksdb.RocksDB;
+import picocli.CommandLine;
+
+import java.nio.charset.StandardCharsets;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.concurrent.Callable;
+
+/**
+ * Drop a column Family/Table in db.
+ */
+@CommandLine.Command(
+name = "drop_column_family",
+description = "drop column family in db."
+)
+public class DropTable implements Callable<Void>, SubcommandWithParent {
+
+  @CommandLine.Option(names = {"--column_family"},
+  description = "Table name")
+  private String tableName;
+
+  @CommandLine.ParentCommand
+  private RDBParser parent;
+
+  @Override
+  public Void call() throws Exception {
+List<ColumnFamilyDescriptor> cfs =
+RocksDBUtils.getColumnFamilyDescriptors(parent.getDbPath());
+final List<ColumnFamilyHandle> columnFamilyHandleList =
+new ArrayList<>();
+try (RocksDB rocksDB = RocksDB.open(
+parent.getDbPath(), cfs, columnFamilyHandleList)) {
+  byte[] nameBytes = tableName.getBytes(StandardCharsets.UTF_8);
+  ColumnFamilyHandle toBeDeletedCf = null;
+  for (ColumnFamilyHandle cf : columnFamilyHandleList) {
+if (Arrays.equals(cf.getName(), nameBytes)) {
+  toBeDeletedCf = cf;
+  break;
+}
+  }
+  rocksDB.dropColumnFamily(toBeDeletedCf);

Review comment:
   Done. Thanks for the suggestion.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] codecov-io edited a comment on pull request #1428: HDDS-4192: enable SCM Raft Group based on config ozone.scm.names

2020-10-09 Thread GitBox


codecov-io edited a comment on pull request #1428:
URL: https://github.com/apache/hadoop-ozone/pull/1428#issuecomment-706121048


   # 
[Codecov](https://codecov.io/gh/apache/hadoop-ozone/pull/1428?src=pr=h1) 
Report
   > Merging 
[#1428](https://codecov.io/gh/apache/hadoop-ozone/pull/1428?src=pr=desc) 
into 
[HDDS-2823](https://codecov.io/gh/apache/hadoop-ozone/commit/40127b3c2402a0cd279eded94764898a52a74c60?el=desc)
 will **decrease** coverage by `0.76%`.
   > The diff coverage is `71.35%`.
   
   [![Impacted file tree 
graph](https://codecov.io/gh/apache/hadoop-ozone/pull/1428/graphs/tree.svg?width=650=150=pr=5YeeptJMby)](https://codecov.io/gh/apache/hadoop-ozone/pull/1428?src=pr=tree)
   
   ```diff
   @@              Coverage Diff               @@
   ##           HDDS-2823    #1428       +/-  ##
   ==============================================
   - Coverage      73.36%   72.59%     -0.77%
   - Complexity     10166    10464      +298
   ==============================================
     Files            994     1030       +36
     Lines          50676    52574     +1898
     Branches        4887     5008      +121
   ==============================================
   + Hits           37177    38167      +990
   - Misses         11153    12005      +852
   - Partials        2346     2402       +56
   ```
   
   
   | [Impacted 
Files](https://codecov.io/gh/apache/hadoop-ozone/pull/1428?src=pr=tree) | 
Coverage Δ | Complexity Δ | |
   |---|---|---|---|
   | 
[...s/ratis/retrypolicy/RetryLimitedPolicyCreator.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1428/diff?src=pr=tree#diff-aGFkb29wLWhkZHMvY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvaGRkcy9yYXRpcy9yZXRyeXBvbGljeS9SZXRyeUxpbWl0ZWRQb2xpY3lDcmVhdG9yLmphdmE=)
 | `0.00% <0.00%> (ø)` | `0.00 <0.00> (?)` | |
   | 
[.../java/org/apache/hadoop/ozone/OzoneConfigKeys.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1428/diff?src=pr=tree#diff-aGFkb29wLWhkZHMvY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3Avb3pvbmUvT3pvbmVDb25maWdLZXlzLmphdmE=)
 | `100.00% <ø> (ø)` | `1.00 <0.00> (ø)` | |
   | 
[...main/java/org/apache/hadoop/ozone/OzoneConsts.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1428/diff?src=pr=tree#diff-aGFkb29wLWhkZHMvY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3Avb3pvbmUvT3pvbmVDb25zdHMuamF2YQ==)
 | `85.71% <ø> (ø)` | `1.00 <0.00> (ø)` | |
   | 
[...iner/common/transport/server/XceiverServerSpi.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1428/diff?src=pr=tree#diff-aGFkb29wLWhkZHMvY29udGFpbmVyLXNlcnZpY2Uvc3JjL21haW4vamF2YS9vcmcvYXBhY2hlL2hhZG9vcC9vem9uZS9jb250YWluZXIvY29tbW9uL3RyYW5zcG9ydC9zZXJ2ZXIvWGNlaXZlclNlcnZlclNwaS5qYXZh)
 | `0.00% <0.00%> (ø)` | `0.00 <0.00> (ø)` | |
   | 
[.../transport/server/ratis/ContainerStateMachine.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1428/diff?src=pr=tree#diff-aGFkb29wLWhkZHMvY29udGFpbmVyLXNlcnZpY2Uvc3JjL21haW4vamF2YS9vcmcvYXBhY2hlL2hhZG9vcC9vem9uZS9jb250YWluZXIvY29tbW9uL3RyYW5zcG9ydC9zZXJ2ZXIvcmF0aXMvQ29udGFpbmVyU3RhdGVNYWNoaW5lLmphdmE=)
 | `70.85% <ø> (-6.06%)` | `61.00 <0.00> (-6.00)` | |
   | 
[...ache/hadoop/hdds/conf/DatanodeRatisGrpcConfig.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1428/diff?src=pr=tree#diff-aGFkb29wLWhkZHMvZnJhbWV3b3JrL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvaGRkcy9jb25mL0RhdGFub2RlUmF0aXNHcnBjQ29uZmlnLmphdmE=)
 | `0.00% <ø> (ø)` | `0.00 <0.00> (ø)` | |
   | 
[...apache/hadoop/hdds/scm/block/BlockManagerImpl.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1428/diff?src=pr=tree#diff-aGFkb29wLWhkZHMvc2VydmVyLXNjbS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvaGFkb29wL2hkZHMvc2NtL2Jsb2NrL0Jsb2NrTWFuYWdlckltcGwuamF2YQ==)
 | `74.77% <ø> (+0.90%)` | `20.00 <0.00> (+1.00)` | |
   | 
[...hdds/scm/container/CloseContainerEventHandler.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1428/diff?src=pr=tree#diff-aGFkb29wLWhkZHMvc2VydmVyLXNjbS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvaGFkb29wL2hkZHMvc2NtL2NvbnRhaW5lci9DbG9zZUNvbnRhaW5lckV2ZW50SGFuZGxlci5qYXZh)
 | `89.65% <ø> (ø)` | `6.00 <0.00> (ø)` | |
   | 
[...java/org/apache/hadoop/hdds/scm/ha/SCMHAUtils.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1428/diff?src=pr=tree#diff-aGFkb29wLWhkZHMvc2VydmVyLXNjbS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvaGFkb29wL2hkZHMvc2NtL2hhL1NDTUhBVXRpbHMuamF2YQ==)
 | `11.11% <ø> (-35.05%)` | `2.00 <0.00> (-4.00)` | |
   | 
[...che/hadoop/hdds/scm/metadata/ContainerIDCodec.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1428/diff?src=pr=tree#diff-aGFkb29wLWhkZHMvc2VydmVyLXNjbS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvaGFkb29wL2hkZHMvc2NtL21ldGFkYXRhL0NvbnRhaW5lcklEQ29kZWMuamF2YQ==)
 | `60.00% <0.00%> (ø)` | `2.00 <0.00> (ø)` | |
   | ... and [272 
more](https://codecov.io/gh/apache/hadoop-ozone/pull/1428/diff?src=pr=tree-more)
 | |
   
   --
   
   [Continue to review full report at 
Codecov](https://codecov.io/gh/apache/hadoop-ozone/pull/1428?src=pr=continue).
   > **Legend** - [Click here to learn 

[GitHub] [hadoop-ozone] ChenSammi commented on pull request #1458: HDDS-3728. Bucket space: check quotaUsageInBytes when write key and allocate block.

2020-10-09 Thread GitBox


ChenSammi commented on pull request #1458:
URL: https://github.com/apache/hadoop-ozone/pull/1458#issuecomment-706132636


   LGTM +1.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] elek merged pull request #1477: HDDS-4311. Type-safe config design doc points to OM HA

2020-10-09 Thread GitBox


elek merged pull request #1477:
URL: https://github.com/apache/hadoop-ozone/pull/1477


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] prashantpogde commented on pull request #1480: HDDS-4315. Use Epoch to generate unique ObjectIDs

2020-10-09 Thread GitBox


prashantpogde commented on pull request #1480:
URL: https://github.com/apache/hadoop-ozone/pull/1480#issuecomment-705775239


   General comment on using the epoch id that increments with every OM restart. 
This can get tricky.
If OM goes into a crash-restart loop, then we have just 2^16 increments available, which is about 65K attempts. If it takes 1 second for OM to come back online, we have ~65K seconds' worth of epoch numbers, or roughly 18 hours of crash looping (see the back-of-envelope sketch below). This is a very pessimistic view, since it may take several seconds for OM to restart, but it does show that:
   - a 16-bit space can be insufficient for this scheme.
   - the epoch need not depend on restart-based increments alone. If it increments only when both of the following hold:
   A) OM restarts, and
   B) some object gets created after the epoch id is incremented,
   then the epoch may last longer. But even then, 16 bits looks insufficient. What if OM creates one object and restarts?
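   A back-of-envelope check of the arithmetic above, just to make the numbers concrete:

   ```java
   public class EpochBudget {
     public static void main(String[] args) {
       long epochs = 1L << 16;          // 65,536 possible 16-bit epochs
       double hours = epochs / 3600.0;  // worst case: one restart per second
       System.out.printf("%d epochs ~ %.1f hours of crash looping%n",
           epochs, hours);              // prints ~18.2 hours
     }
   }
   ```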



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] ChenSammi commented on pull request #1454: HDDS-4285. Read is slow due to frequent calls to UGI.getCurrentUser() and getTokens()

2020-10-09 Thread GitBox


ChenSammi commented on pull request #1454:
URL: https://github.com/apache/hadoop-ozone/pull/1454#issuecomment-705945770


   @adoroszlai , thanks for the explanation.  +1. 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] elek commented on pull request #1083: HDDS-3814. Drop a column family through debug cli tool

2020-10-09 Thread GitBox


elek commented on pull request #1083:
URL: https://github.com/apache/hadoop-ozone/pull/1083#issuecomment-706156646


   Merging it now. Thanks for the review @bharatviswa504 and @avijayanhwx, and for the patch @maobaolong.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] elek commented on a change in pull request #1451: HDDS-4117. Normalize Keypath for listKeys.

2020-10-09 Thread GitBox


elek commented on a change in pull request #1451:
URL: https://github.com/apache/hadoop-ozone/pull/1451#discussion_r502352349



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
##
@@ -919,12 +920,32 @@ private boolean isKeyEmpty(OmKeyInfo keyInfo) {
 // underlying table using an iterator. That automatically creates a
 // snapshot of the data, so we don't need these locks at a higher level
 // when we iterate.
+
+startKey = normalizeListKeyPath(startKey);
+keyPrefix = normalizeListKeyPath(keyPrefix);

Review comment:
   Sorry, I don't get it. Why do we need to check it _inside normalizeListKeyPath_?
   
   As I wrote, it's a very minor thing, but it seems better to move the check out of the method because it improves the readability (IMHO!).
   
   When somebody reads the `listKeys` method, it suggests that the keys are normalized, but in fact they are normalized only if enableFileSystemPaths is set. This can be confusing, as the method name is `normalizeListKeyPath` and not something like `normalizeListKeyPathIfNormalizationIsEnabled`.
   
   I suggested moving this condition out of the method to improve the readability, but if you think it's a bad idea, it can be ignored.
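   For illustration, a minimal sketch of the call-site check being suggested (a fragment of the quoted `listKeys` method, not the merged change):

   ```java
   // Caller decides whether normalization applies, so the helper's name stays
   // accurate about what it does.
   if (enableFileSystemPaths) {
     startKey = normalizeListKeyPath(startKey);
     keyPrefix = normalizeListKeyPath(keyPrefix);
   }
   List<OmKeyInfo> keyList = metadataManager.listKeys(volumeName, bucketName,
       startKey, keyPrefix, maxKeys);
   ```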

##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/OMClientRequest.java
##
@@ -298,21 +296,11 @@ public static String validateAndNormalizeKey(boolean 
enableFileSystemPaths,
 }
   }
 
-  @SuppressFBWarnings("DMI_HARDCODED_ABSOLUTE_FILENAME")
+
   public static String validateAndNormalizeKey(String keyName)
   throws OMException {
-String normalizedKeyName;
-if (keyName.startsWith(OM_KEY_PREFIX)) {
-  normalizedKeyName = Paths.get(keyName).toUri().normalize().getPath();
-} else {
-  normalizedKeyName = Paths.get(OM_KEY_PREFIX, keyName).toUri()
-  .normalize().getPath();
-}
-if (!keyName.equals(normalizedKeyName)) {
-  LOG.debug("Normalized key {} to {} ", keyName,
-  normalizedKeyName.substring(1));
-}
-return isValidKeyPath(normalizedKeyName.substring(1));
+String normalizedKeyName = OmUtils.normalizeKey(keyName);
+return isValidKeyPath(normalizedKeyName);

Review comment:
   Got it, thanks a lot. These are the cases where the normalization will result in an invalid path due to too many `..` segments (for example).

##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
##
@@ -919,12 +920,32 @@ private boolean isKeyEmpty(OmKeyInfo keyInfo) {
 // underlying table using an iterator. That automatically creates a
 // snapshot of the data, so we don't need these locks at a higher level
 // when we iterate.
+
+startKey = normalizeListKeyPath(startKey);
+keyPrefix = normalizeListKeyPath(keyPrefix);
+
 List<OmKeyInfo> keyList = metadataManager.listKeys(volumeName, bucketName,
 startKey, keyPrefix, maxKeys);
 refreshPipeline(keyList);
 return keyList;
   }
 
+  private String normalizeListKeyPath(String keyPath) {
+
+String normalizeKeyPath = keyPath;
+if (enableFileSystemPaths) {
+  // For empty strings do nothing.
+  if (StringUtils.isBlank(keyPath)) {

Review comment:
   Thanks for explaining it.
   
   >  the Paths method will fail with NPE, so this is also 
   
   Not a big deal, but if we move the empty check to `normalizeKey`, the method will be safe forever, and we don't need to do the empty check every time we call it.

##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
##
@@ -919,12 +920,32 @@ private boolean isKeyEmpty(OmKeyInfo keyInfo) {
 // underlying table using an iterator. That automatically creates a
 // snapshot of the data, so we don't need these locks at a higher level
 // when we iterate.
+
+startKey = normalizeListKeyPath(startKey);
+keyPrefix = normalizeListKeyPath(keyPrefix);
+
 List<OmKeyInfo> keyList = metadataManager.listKeys(volumeName, bucketName,
 startKey, keyPrefix, maxKeys);
 refreshPipeline(keyList);
 return keyList;
   }
 
+  private String normalizeListKeyPath(String keyPath) {
+
+String normalizeKeyPath = keyPath;
+if (enableFileSystemPaths) {
+  // For empty strings do nothing.
+  if (StringUtils.isBlank(keyPath)) {

Review comment:
   BTW (just a discussion, not a code review comment): it can be useful to differentiate between normalization (removing/resolving `..`, `//`) and handling a trailing `/`. My impression is that we need different rules for them.
   
   This is a good example here: you need the first ("normalization") but not the second. (Maybe two different methods? A sketch follows below.)
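   A hedged sketch of what two separate helpers could look like (hypothetical names; the `Paths`-based normalization only approximates the behavior discussed in this thread):

   ```java
   import java.nio.file.Paths;

   final class KeyPathRules {
     // Resolve redundant elements such as ".", ".." and "//"; empty input is
     // a no-op, per the empty-check discussion above.
     static String normalize(String key) {
       if (key == null || key.isEmpty()) {
         return key;
       }
       String p = Paths.get("/", key).normalize().toString();
       return p.length() > 1 ? p.substring(1) : "";
     }

     // Trailing-slash handling as its own rule, kept out of normalize().
     static String dropTrailingSlash(String key) {
       return key != null && key.endsWith("/")
           ? key.substring(0, key.length() - 1) : key;
     }
   }
   ```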





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub 

[GitHub] [hadoop-ozone] elek merged pull request #1083: HDDS-3814. Drop a column family through debug cli tool

2020-10-09 Thread GitBox


elek merged pull request #1083:
URL: https://github.com/apache/hadoop-ozone/pull/1083


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] umamaheswararao merged pull request #1485: HDDS-4325. Incompatible return codes from Ozone getconf -confKey

2020-10-09 Thread GitBox


umamaheswararao merged pull request #1485:
URL: https://github.com/apache/hadoop-ozone/pull/1485


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #1473: HDDS-4266: CreateFile : store parent dir entries into DirTable and file entry into separate FileTable

2020-10-09 Thread GitBox


bharatviswa504 commented on a change in pull request #1473:
URL: https://github.com/apache/hadoop-ozone/pull/1473#discussion_r501443397



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMFileCreateRequestV1.java
##
@@ -0,0 +1,289 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.file;
+
+import com.google.common.base.Optional;
+import org.apache.hadoop.hdds.utils.db.cache.CacheKey;
+import org.apache.hadoop.hdds.utils.db.cache.CacheValue;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.*;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
+import org.apache.hadoop.ozone.om.request.util.OmResponseUtil;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.file.OMFileCreateResponse;
+import org.apache.hadoop.ozone.om.response.file.OMFileCreateResponseV1;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.*;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.nio.file.Paths;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.stream.Collectors;
+
+import static 
org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+import static 
org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.*;
+
+/**
+ * Handles create file request layout version1.
+ */
+public class OMFileCreateRequestV1 extends OMFileCreateRequest {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMFileCreateRequestV1.class);
+  public OMFileCreateRequestV1(OMRequest omRequest) {
+super(omRequest);
+  }
+
+  @Override
+  @SuppressWarnings("methodlength")
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+  long trxnLogIndex, OzoneManagerDoubleBufferHelper omDoubleBufferHelper) {
+
+CreateFileRequest createFileRequest = 
getOmRequest().getCreateFileRequest();
+KeyArgs keyArgs = createFileRequest.getKeyArgs();
+Map auditMap = buildKeyArgsAuditMap(keyArgs);
+
+String volumeName = keyArgs.getVolumeName();
+String bucketName = keyArgs.getBucketName();
+String keyName = keyArgs.getKeyName();
+
+// if isRecursive is true, the file will be created even if the parent
+// directories do not exist.
+boolean isRecursive = createFileRequest.getIsRecursive();
+if (LOG.isDebugEnabled()) {
+  LOG.debug("File create for : " + volumeName + "/" + bucketName + "/"
+  + keyName + ":" + isRecursive);
+}
+
+// if isOverWrite is true, the file will be overwritten.
+boolean isOverWrite = createFileRequest.getIsOverwrite();
+
+OMMetrics omMetrics = ozoneManager.getMetrics();
+omMetrics.incNumCreateFile();
+
+OMMetadataManager omMetadataManager = ozoneManager.getMetadataManager();
+
+boolean acquiredLock = false;
+
+OmVolumeArgs omVolumeArgs = null;
+OmBucketInfo omBucketInfo = null;
+final List<OmKeyLocationInfo> locations = new ArrayList<>();
+List<OmKeyInfo> missingParentInfos;
+int numKeysCreated = 0;
+
+OMClientResponse omClientResponse = null;
+OMResponse.Builder omResponse = OmResponseUtil.getOMResponseBuilder(
+getOmRequest());
+IOException exception = null;
+Result result = null;
+try {
+  keyArgs = resolveBucketLink(ozoneManager, keyArgs, auditMap);
+  volumeName = keyArgs.getVolumeName();
+  bucketName = keyArgs.getBucketName();
+
+  if (keyName.length() == 0) {
+// Check if this is the root of the filesystem.
+throw new OMException("Can not write to directory: " + keyName,
+OMException.ResultCodes.NOT_A_FILE);
+  }
+
+  // check Acl
+  checkKeyAcls(ozoneManager, volumeName, bucketName, keyName,
+ 

[GitHub] [hadoop-ozone] adoroszlai commented on pull request #1454: HDDS-4285. Read is slow due to frequent calls to UGI.getCurrentUser() and getTokens()

2020-10-09 Thread GitBox


adoroszlai commented on pull request #1454:
URL: https://github.com/apache/hadoop-ozone/pull/1454#issuecomment-705673221


   Thanks @xiaoyuyao for the suggestion to directly pass down the block token.  
I have updated the patch accordingly.






[GitHub] [hadoop-ozone] elek commented on a change in pull request #1484: HDDS-4322. Add integration tests for Decommission and resolve issues detected by the tests.

2020-10-09 Thread GitBox


elek commented on a change in pull request #1484:
URL: https://github.com/apache/hadoop-ozone/pull/1484#discussion_r502419123



##
File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DatanodeAdminMonitorImpl.java
##
@@ -294,6 +294,9 @@ private boolean 
checkContainersReplicatedOnNode(DatanodeDetails dn)
 "in containerManager", cid, dn);
   }
 }
+LOG.info("{} has {} sufficientlyReplicated, {} underReplicated and {} " +

Review comment:
   Do we need it at info level? What is the expected frequency of this 
message?
   
   I can create a decommission insight point to make it easier to see this 
message, if it's required for debugging...
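   For illustration, a minimal sketch of the debug-level alternative (the 
variable names are assumed from the diff above, not taken from any actual 
patch):
   
   ```java
   // Hypothetical variant: emit the per-node replication summary at debug
   // level so the admin monitor does not hit the info log on every pass.
   if (LOG.isDebugEnabled()) {
     LOG.debug("{} has {} sufficientlyReplicated, {} underReplicated and {} "
         + "unhealthy containers", dn, sufficientlyReplicated,
         underReplicated, unhealthy);
   }
   ```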








[GitHub] [hadoop-ozone] umamaheswararao commented on pull request #1485: HDDS-4325. Incompatible return codes from Ozone getconf -confKey

2020-10-09 Thread GitBox


umamaheswararao commented on pull request #1485:
URL: https://github.com/apache/hadoop-ozone/pull/1485#issuecomment-705697044


   Thanks @adoroszlai for working on this. The changes look good to me.
   +1






[GitHub] [hadoop-ozone] captainzmc commented on a change in pull request #1458: HDDS-3728. Bucket space: check quotaUsageInBytes when write key and allocate block.

2020-10-09 Thread GitBox


captainzmc commented on a change in pull request #1458:
URL: https://github.com/apache/hadoop-ozone/pull/1458#discussion_r50227



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMAllocateBlockRequest.java
##
@@ -218,6 +219,8 @@ public OMClientResponse validateAndUpdateCache(OzoneManager 
ozoneManager,
   // update usedBytes atomically.
   omVolumeArgs.getUsedBytes().add(preAllocatedSpace);
   omBucketInfo.getUsedBytes().add(preAllocatedSpace);
+  long vol = omVolumeArgs.getUsedBytes().sum();
+  long buk = omBucketInfo.getUsedBytes().sum();

Review comment:
   Redundancy code,  will delete this.








[GitHub] [hadoop-ozone] codecov-io commented on pull request #1428: HDDS-4192: enable SCM Raft Group based on config ozone.scm.names

2020-10-09 Thread GitBox


codecov-io commented on pull request #1428:
URL: https://github.com/apache/hadoop-ozone/pull/1428#issuecomment-706121048


   # [Codecov](https://codecov.io/gh/apache/hadoop-ozone/pull/1428?src=pr&el=h1) Report
   > Merging [#1428](https://codecov.io/gh/apache/hadoop-ozone/pull/1428?src=pr&el=desc) into [HDDS-2823](https://codecov.io/gh/apache/hadoop-ozone/commit/40127b3c2402a0cd279eded94764898a52a74c60?el=desc) will **decrease** coverage by `0.76%`.
   > The diff coverage is `71.37%`.
   
   ```diff
   @@              Coverage Diff               @@
   ##           HDDS-2823    #1428       +/-   ##
   ==============================================
   - Coverage      73.36%   72.59%     -0.77%
   - Complexity     10166    10462       +296
   ==============================================
     Files            994     1030        +36
     Lines          50676    52575      +1899
     Branches        4887     5008       +121
   ==============================================
   + Hits           37177    38167       +990
   - Misses         11153    12003       +850
   - Partials        2346     2405        +59
   ```
   
   | Impacted Files | Coverage Δ | Complexity Δ |
   |---|---|---|
   | ...ratis/retrypolicy/RetryLimitedPolicyCreator.java | `0.00% <0.00%> (ø)` | `0.00 <0.00> (?)` |
   | .../java/org/apache/hadoop/ozone/OzoneConfigKeys.java | `100.00% <ø> (ø)` | `1.00 <0.00> (ø)` |
   | ...main/java/org/apache/hadoop/ozone/OzoneConsts.java | `85.71% <ø> (ø)` | `1.00 <0.00> (ø)` |
   | ...iner/common/transport/server/XceiverServerSpi.java | `0.00% <0.00%> (ø)` | `0.00 <0.00> (ø)` |
   | .../transport/server/ratis/ContainerStateMachine.java | `71.07% <ø> (-5.83%)` | `62.00 <0.00> (-5.00)` |
   | ...ache/hadoop/hdds/conf/DatanodeRatisGrpcConfig.java | `0.00% <ø> (ø)` | `0.00 <0.00> (ø)` |
   | ...apache/hadoop/hdds/scm/block/BlockManagerImpl.java | `74.77% <ø> (+0.90%)` | `20.00 <0.00> (+1.00)` |
   | ...hdds/scm/container/CloseContainerEventHandler.java | `89.65% <ø> (ø)` | `6.00 <0.00> (ø)` |
   | ...java/org/apache/hadoop/hdds/scm/ha/SCMHAUtils.java | `11.11% <ø> (-35.05%)` | `2.00 <0.00> (-4.00)` |
   | ...che/hadoop/hdds/scm/metadata/ContainerIDCodec.java | `60.00% <0.00%> (ø)` | `2.00 <0.00> (ø)` |
   | ... and 271 more | | |
   
   [Continue to review full report at Codecov](https://codecov.io/gh/apache/hadoop-ozone/pull/1428?src=pr&el=continue).

[GitHub] [hadoop-ozone] adoroszlai commented on pull request #1485: HDDS-4325. Incompatible return codes from Ozone getconf -confKey

2020-10-09 Thread GitBox


adoroszlai commented on pull request #1485:
URL: https://github.com/apache/hadoop-ozone/pull/1485#issuecomment-705710675


   Thanks @umamaheswararao for reviewing and merging it.






[GitHub] [hadoop-ozone] maobaolong closed pull request #1407: HDDS-4158. Provide a class type for Java based configuration

2020-10-09 Thread GitBox


maobaolong closed pull request #1407:
URL: https://github.com/apache/hadoop-ozone/pull/1407


   






[GitHub] [hadoop-ozone] prashantpogde edited a comment on pull request #1480: HDDS-4315. Use Epoch to generate unique ObjectIDs

2020-10-09 Thread GitBox


prashantpogde edited a comment on pull request #1480:
URL: https://github.com/apache/hadoop-ozone/pull/1480#issuecomment-705775239


   General comment on using an epoch id that increments with every OM restart: 
this can get tricky.
   If OM goes into a crash-restart loop, we have just 2^16 increments 
available, which is about 65K attempts. If it takes 1 second for OM to come 
back online, that is 65K seconds' worth of epoch numbers, or roughly 18 hours 
of crash looping. This is a very pessimistic view (it may take several seconds 
for OM to restart), but it does show how:
   - the 16-bit space can be insufficient for this scheme;
   - the epoch need not be incremented on restart alone. If it increments only 
when both of the following conditions hold:
   A) OM restarts, and
   B) some object was created after the epoch id was last incremented,
   then an epoch may last longer. But even then, 16 bits looks insufficient: 
what if OM creates one object and restarts, in a loop?
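   As a back-of-envelope check of the numbers above (a sketch assuming the 
stated one restart per second):
   
   ```java
   // How long a crash-restart loop takes to exhaust a 16-bit epoch space.
   long epochs = 1L << 16;                              // 65,536 epoch values
   long secondsPerRestart = 1;                          // pessimistic restart time
   double hours = epochs * secondsPerRestart / 3600.0;  // ~18.2 hours
   System.out.printf("epoch space lasts ~%.1f hours%n", hours);
   ```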






[jira] [Updated] (HDDS-4312) findbugs check succeeds despite compile error

2020-10-09 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek updated HDDS-4312:
--
Fix Version/s: 1.1.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> findbugs check succeeds despite compile error
> -
>
> Key: HDDS-4312
> URL: https://issues.apache.org/jira/browse/HDDS-4312
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 1.1.0
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> Findbugs check has been silently failing but reporting success for some time 
> now.  The problem is that {{findbugs.sh}} determines its exit code based on 
> the number of findbugs failures.  If the {{compile}} step fails, the exit 
> code is 0, i.e. success.
> {code:title=https://github.com/apache/hadoop-ozone/runs/1210535433#step:3:866}
> 2020-10-02T18:37:57.0699502Z [ERROR] Failed to execute goal on project 
> hadoop-hdds-client: Could not resolve dependencies for project 
> org.apache.hadoop:hadoop-hdds-client:jar:1.1.0-SNAPSHOT: Could not find 
> artifact org.apache.hadoop:hadoop-hdds-common:jar:tests:1.1.0-SNAPSHOT in 
> apache.snapshots.https 
> (https://repository.apache.org/content/repositories/snapshots) -> [Help 1]
> {code}






[GitHub] [hadoop-ozone] ChenSammi commented on a change in pull request #1458: HDDS-3728. Bucket space: check quotaUsageInBytes when write key and allocate block.

2020-10-09 Thread GitBox


ChenSammi commented on a change in pull request #1458:
URL: https://github.com/apache/hadoop-ozone/pull/1458#discussion_r502231114



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMAllocateBlockRequest.java
##
@@ -218,6 +219,8 @@ public OMClientResponse validateAndUpdateCache(OzoneManager 
ozoneManager,
   // update usedBytes atomically.
   omVolumeArgs.getUsedBytes().add(preAllocatedSpace);
   omBucketInfo.getUsedBytes().add(preAllocatedSpace);
+  long vol = omVolumeArgs.getUsedBytes().sum();
+  long buk = omBucketInfo.getUsedBytes().sum();

Review comment:
   Are these two used somewhere? 

##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMFileCreateRequest.java
##
@@ -279,11 +279,12 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
 
   omVolumeArgs = getVolumeInfo(omMetadataManager, volumeName);
   omBucketInfo = getBucketInfo(omMetadataManager, volumeName, bucketName);
-  // check volume quota
+  // check volume and bucket quota
   long preAllocatedSpace = newLocationList.size()
   * ozoneManager.getScmBlockSize()
   * omKeyInfo.getFactor().getNumber();
   checkVolumeQuotaInBytes(omVolumeArgs, preAllocatedSpace);
+  checkBucketQuotaInBytes(omBucketInfo, preAllocatedSpace);

Review comment:
   Better to check the bucket quota before the volume quota check.
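   In code, the suggested ordering would look roughly like this (names reused 
from the diff above; a sketch, not the final patch):
   
   ```java
   long preAllocatedSpace = newLocationList.size()
       * ozoneManager.getScmBlockSize()
       * omKeyInfo.getFactor().getNumber();
   // Check the narrower bucket quota first, then the volume quota.
   checkBucketQuotaInBytes(omBucketInfo, preAllocatedSpace);
   checkVolumeQuotaInBytes(omVolumeArgs, preAllocatedSpace);
   ```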








[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #1435: HDDS-4122. Implement OM Delete Expired Open Key Request and Response

2020-10-09 Thread GitBox


bharatviswa504 commented on a change in pull request #1435:
URL: https://github.com/apache/hadoop-ozone/pull/1435#discussion_r501938720



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/AbstractOMKeyDeleteResponse.java
##
@@ -0,0 +1,143 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.response.key;
+
+import org.apache.hadoop.hdds.utils.db.Table;
+import org.apache.hadoop.ozone.OmUtils;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfoGroup;
+import org.apache.hadoop.ozone.om.helpers.RepeatedOmKeyInfo;
+import org.apache.hadoop.ozone.om.response.CleanupTableInfo;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.hdds.utils.db.BatchOperation;
+
+import java.io.IOException;
+import javax.annotation.Nullable;
+import javax.annotation.Nonnull;
+
+import static org.apache.hadoop.ozone.om.OmMetadataManagerImpl.DELETED_TABLE;
+
+/**
+ * Base class for responses that need to move keys from an arbitrary table to
+ * the deleted table.
+ */
+@CleanupTableInfo(cleanupTables = {DELETED_TABLE})
+public abstract class AbstractOMKeyDeleteResponse extends OMClientResponse {
+
+  private boolean isRatisEnabled;
+
+  public AbstractOMKeyDeleteResponse(
+  @Nonnull OMResponse omResponse, boolean isRatisEnabled) {
+
+super(omResponse);
+this.isRatisEnabled = isRatisEnabled;
+  }
+
+  /**
+   * For when the request is not successful.
+   * For a successful request, the other constructor should be used.
+   */
+  public AbstractOMKeyDeleteResponse(@Nonnull OMResponse omResponse) {
+super(omResponse);
+checkStatusNotOK();
+  }
+
+  /**
+   * Adds the operation of deleting the {@code keyName omKeyInfo} pair from
+   * {@code fromTable} to the batch operation {@code batchOperation}. The
+   * batch operation is not committed, so no changes are persisted to disk.
+   * The log transaction index used will be retrieved by calling
+   * {@link OmKeyInfo#getUpdateID} on {@code omKeyInfo}.
+   */
+  protected void addDeletionToBatch(
+  OMMetadataManager omMetadataManager,
+  BatchOperation batchOperation,
+  Table<String, OmKeyInfo> fromTable,
+  String keyName,
+  OmKeyInfo omKeyInfo) throws IOException {
+
+addDeletionToBatch(omMetadataManager, batchOperation, fromTable, keyName,
+omKeyInfo, omKeyInfo.getUpdateID());
+  }
+
+  /**
+   * Adds the operation of deleting the {@code keyName omKeyInfo} pair from
+   * {@code fromTable} to the batch operation {@code batchOperation}. The
+   * batch operation is not committed, so no changes are persisted to disk.
+   */
+  protected void addDeletionToBatch(

Review comment:
   Minor: Can we merge these two functions?
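   A possible merged form, as a sketch only (the second overload's body is cut 
off above, so the actual delete-and-move logic is elided; the trxnLogIndex 
parameter name is assumed):
   
   ```java
   // Single method: callers that want the key's own updateID simply pass
   // omKeyInfo.getUpdateID() at the call site.
   protected void addDeletionToBatch(
       OMMetadataManager omMetadataManager,
       BatchOperation batchOperation,
       Table<String, OmKeyInfo> fromTable,
       String keyName,
       OmKeyInfo omKeyInfo,
       long trxnLogIndex) throws IOException {
     // remove the key from fromTable and record it in the deleted table,
     // all inside the supplied (uncommitted) batch operation
   }
   ```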

##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/AbstractOMKeyDeleteResponse.java
##
@@ -0,0 +1,143 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.response.key;
+
+import org.apache.hadoop.hdds.utils.db.Table;
+import org.apache.hadoop.ozone.OmUtils;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import 

[GitHub] [hadoop-ozone] elek merged pull request #1477: HDDS-4311. Type-safe config design doc points to OM HA

2020-10-09 Thread GitBox


elek merged pull request #1477:
URL: https://github.com/apache/hadoop-ozone/pull/1477


   






[jira] [Updated] (HDDS-4311) Type-safe config design doc points to OM HA

2020-10-09 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek updated HDDS-4311:
--
Fix Version/s: 1.1.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Type-safe config design doc points to OM HA
> ---
>
> Key: HDDS-4311
> URL: https://issues.apache.org/jira/browse/HDDS-4311
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
> Fix For: 1.1.0
>
>
> Abstract and links for 
> http://hadoop.apache.org/ozone/docs/1.0.0/design/typesafeconfig.html are 
> wrong, reference OM HA design doc.






[jira] [Updated] (HDDS-4311) Type-safe config design doc points to OM HA

2020-10-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4311:
-
Labels: pull-request-available  (was: )

> Type-safe config design doc points to OM HA
> ---
>
> Key: HDDS-4311
> URL: https://issues.apache.org/jira/browse/HDDS-4311
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> Abstract and links for 
> http://hadoop.apache.org/ozone/docs/1.0.0/design/typesafeconfig.html are 
> wrong, reference OM HA design doc.






[jira] [Resolved] (HDDS-3814) Drop a column family through debug ldb tool

2020-10-09 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek resolved HDDS-3814.
---
Fix Version/s: 1.1.0
   Resolution: Fixed

> Drop a column family through debug ldb tool
> ---
>
> Key: HDDS-3814
> URL: https://issues.apache.org/jira/browse/HDDS-3814
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Tools
>Affects Versions: 1.1.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>







[GitHub] [hadoop-ozone] elek merged pull request #1083: HDDS-3814. Drop a column family through debug cli tool

2020-10-09 Thread GitBox


elek merged pull request #1083:
URL: https://github.com/apache/hadoop-ozone/pull/1083


   






[GitHub] [hadoop-ozone] elek commented on pull request #1083: HDDS-3814. Drop a column family through debug cli tool

2020-10-09 Thread GitBox


elek commented on pull request #1083:
URL: https://github.com/apache/hadoop-ozone/pull/1083#issuecomment-706156646


   Merging it now. Thanks for the review @bharatviswa504 and @avijayanhwx, and 
for the patch @maobaolong 






[GitHub] [hadoop-ozone] codecov-io edited a comment on pull request #1428: HDDS-4192: enable SCM Raft Group based on config ozone.scm.names

2020-10-09 Thread GitBox


codecov-io edited a comment on pull request #1428:
URL: https://github.com/apache/hadoop-ozone/pull/1428#issuecomment-706121048


   # [Codecov](https://codecov.io/gh/apache/hadoop-ozone/pull/1428?src=pr&el=h1) Report
   > Merging [#1428](https://codecov.io/gh/apache/hadoop-ozone/pull/1428?src=pr&el=desc) into [HDDS-2823](https://codecov.io/gh/apache/hadoop-ozone/commit/40127b3c2402a0cd279eded94764898a52a74c60?el=desc) will **decrease** coverage by `0.76%`.
   > The diff coverage is `71.35%`.
   
   ```diff
   @@              Coverage Diff               @@
   ##           HDDS-2823    #1428       +/-   ##
   ==============================================
   - Coverage      73.36%   72.59%     -0.77%
   - Complexity     10166    10464       +298
   ==============================================
     Files            994     1030        +36
     Lines          50676    52574      +1898
     Branches        4887     5008       +121
   ==============================================
   + Hits           37177    38167       +990
   - Misses         11153    12005       +852
   - Partials        2346     2402        +56
   ```
   
   | Impacted Files | Coverage Δ | Complexity Δ |
   |---|---|---|
   | ...ratis/retrypolicy/RetryLimitedPolicyCreator.java | `0.00% <0.00%> (ø)` | `0.00 <0.00> (?)` |
   | .../java/org/apache/hadoop/ozone/OzoneConfigKeys.java | `100.00% <ø> (ø)` | `1.00 <0.00> (ø)` |
   | ...main/java/org/apache/hadoop/ozone/OzoneConsts.java | `85.71% <ø> (ø)` | `1.00 <0.00> (ø)` |
   | ...iner/common/transport/server/XceiverServerSpi.java | `0.00% <0.00%> (ø)` | `0.00 <0.00> (ø)` |
   | .../transport/server/ratis/ContainerStateMachine.java | `70.85% <ø> (-6.06%)` | `61.00 <0.00> (-6.00)` |
   | ...ache/hadoop/hdds/conf/DatanodeRatisGrpcConfig.java | `0.00% <ø> (ø)` | `0.00 <0.00> (ø)` |
   | ...apache/hadoop/hdds/scm/block/BlockManagerImpl.java | `74.77% <ø> (+0.90%)` | `20.00 <0.00> (+1.00)` |
   | ...hdds/scm/container/CloseContainerEventHandler.java | `89.65% <ø> (ø)` | `6.00 <0.00> (ø)` |
   | ...java/org/apache/hadoop/hdds/scm/ha/SCMHAUtils.java | `11.11% <ø> (-35.05%)` | `2.00 <0.00> (-4.00)` |
   | ...che/hadoop/hdds/scm/metadata/ContainerIDCodec.java | `60.00% <0.00%> (ø)` | `2.00 <0.00> (ø)` |
   | ... and 272 more | | |
   
   [Continue to review full report at Codecov](https://codecov.io/gh/apache/hadoop-ozone/pull/1428?src=pr&el=continue).

[GitHub] [hadoop-ozone] linyiqun commented on a change in pull request #1473: HDDS-4266: CreateFile : store parent dir entries into DirTable and file entry into separate FileTable

2020-10-09 Thread GitBox


linyiqun commented on a change in pull request #1473:
URL: https://github.com/apache/hadoop-ozone/pull/1473#discussion_r502393095



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
##
@@ -227,6 +247,9 @@ protected OmMetadataManagerImpl() {
 
   @Override
   public Table<String, OmKeyInfo> getOpenKeyTable() {
+if (enableFSPaths && OzoneManagerRatisUtils.isOmLayoutVersionV1()) {

Review comment:
   @rakeshadr, the V1 feature is a new key format and is not compatible 
with the old format. The enableFSPaths flag behaves as a switch here, so why 
do we remove it? I don't fully understand this.
   Without the enableFSPaths check, we will directly use the new key format by 
default once the V1 feature is supported.
   Please correct me if I am wrong.








[jira] [Resolved] (HDDS-3728) Bucket space: check quotaUsageInBytes when write key

2020-10-09 Thread mingchao zhao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingchao zhao resolved HDDS-3728.
-
Resolution: Fixed

> Bucket space: check quotaUsageInBytes when write key
> 
>
> Key: HDDS-3728
> URL: https://issues.apache.org/jira/browse/HDDS-3728
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Simon Su
>Assignee: mingchao zhao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>







[jira] [Commented] (HDDS-3728) Bucket space: check quotaUsageInBytes when write key

2020-10-09 Thread mingchao zhao (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17210881#comment-17210881
 ] 

mingchao zhao commented on HDDS-3728:
-

PR has been merged, close this.

> Bucket space: check quotaUsageInBytes when write key
> 
>
> Key: HDDS-3728
> URL: https://issues.apache.org/jira/browse/HDDS-3728
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Simon Su
>Assignee: mingchao zhao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>







[jira] [Updated] (HDDS-3728) Bucket space: check quotaUsageInBytes when write key

2020-10-09 Thread mingchao zhao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingchao zhao updated HDDS-3728:

Fix Version/s: 1.1.0

> Bucket space: check quotaUsageInBytes when write key
> 
>
> Key: HDDS-3728
> URL: https://issues.apache.org/jira/browse/HDDS-3728
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Simon Su
>Assignee: mingchao zhao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>







[GitHub] [hadoop-ozone] ChenSammi merged pull request #1458: HDDS-3728. Bucket space: check quotaUsageInBytes when write key and allocate block.

2020-10-09 Thread GitBox


ChenSammi merged pull request #1458:
URL: https://github.com/apache/hadoop-ozone/pull/1458


   






[GitHub] [hadoop-ozone] ChenSammi commented on pull request #1458: HDDS-3728. Bucket space: check quotaUsageInBytes when write key and allocate block.

2020-10-09 Thread GitBox


ChenSammi commented on pull request #1458:
URL: https://github.com/apache/hadoop-ozone/pull/1458#issuecomment-706132636


   LGTM +1.






[GitHub] [hadoop-ozone] elek commented on a change in pull request #1451: HDDS-4117. Normalize Keypath for listKeys.

2020-10-09 Thread GitBox


elek commented on a change in pull request #1451:
URL: https://github.com/apache/hadoop-ozone/pull/1451#discussion_r502358863



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
##
@@ -919,12 +920,32 @@ private boolean isKeyEmpty(OmKeyInfo keyInfo) {
 // underlying table using an iterator. That automatically creates a
 // snapshot of the data, so we don't need these locks at a higher level
 // when we iterate.
+
+startKey = normalizeListKeyPath(startKey);
+keyPrefix = normalizeListKeyPath(keyPrefix);
+
 List<OmKeyInfo> keyList = metadataManager.listKeys(volumeName, bucketName,
 startKey, keyPrefix, maxKeys);
 refreshPipeline(keyList);
 return keyList;
   }
 
+  private String normalizeListKeyPath(String keyPath) {
+
+String normalizeKeyPath = keyPath;
+if (enableFileSystemPaths) {
+  // For empty strings do nothing.
+  if (StringUtils.isBlank(keyPath)) {

Review comment:
   BTW (just a discussion, not a code review comment): it can be useful to 
differentiate between normalization (removing/resolving `..`, `//`) and 
handling a closing `/`. My impression is that we need different rules for them.
   
   This is a good example: here you need the first ("normalization") but not 
the second. (Maybe two different methods?)
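   A rough sketch of that split (method names invented for the example; the 
Paths-based normalization mirrors validateAndNormalizeKey quoted elsewhere in 
this thread, and OM_KEY_PREFIX is the usual "/"):
   
   ```java
   // Helper 1: resolve "." / ".." / "//" segments only.
   private static String resolveDots(String keyName) {
     return Paths.get(OM_KEY_PREFIX, keyName)
         .toUri().normalize().getPath().substring(1);
   }
   
   // Helper 2: only trim a closing "/", for callers that keep prefixes.
   private static String stripTrailingSlash(String keyName) {
     return keyName.endsWith(OM_KEY_PREFIX)
         ? keyName.substring(0, keyName.length() - 1)
         : keyName;
   }
   ```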








[GitHub] [hadoop-ozone] elek commented on a change in pull request #1451: HDDS-4117. Normalize Keypath for listKeys.

2020-10-09 Thread GitBox


elek commented on a change in pull request #1451:
URL: https://github.com/apache/hadoop-ozone/pull/1451#discussion_r502357778



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
##
@@ -919,12 +920,32 @@ private boolean isKeyEmpty(OmKeyInfo keyInfo) {
 // underlying table using an iterator. That automatically creates a
 // snapshot of the data, so we don't need these locks at a higher level
 // when we iterate.
+
+startKey = normalizeListKeyPath(startKey);
+keyPrefix = normalizeListKeyPath(keyPrefix);
+
 List<OmKeyInfo> keyList = metadataManager.listKeys(volumeName, bucketName,
 startKey, keyPrefix, maxKeys);
 refreshPipeline(keyList);
 return keyList;
   }
 
+  private String normalizeListKeyPath(String keyPath) {
+
+String normalizeKeyPath = keyPath;
+if (enableFileSystemPaths) {
+  // For empty strings do nothing.
+  if (StringUtils.isBlank(keyPath)) {

Review comment:
   Thanks for explaining it.
   
   >  the Paths method will fail with NPE, so this is also 
   
   Not a big deal, but if we move the empty check to `normalizeKey`, the 
method will be safe forever, and we won't need to repeat the empty check every 
time we call it.
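   In code, that suggestion is roughly (a sketch against OmUtils.normalizeKey, 
whose real body is not shown in this thread; the Paths logic is borrowed from 
validateAndNormalizeKey above):
   
   ```java
   public static String normalizeKey(String keyName) {
     // Guard inside the helper so every caller is blank/NPE safe by default.
     if (StringUtils.isBlank(keyName)) {
       return keyName;
     }
     return Paths.get(OM_KEY_PREFIX, keyName)
         .toUri().normalize().getPath().substring(1);
   }
   ```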








[GitHub] [hadoop-ozone] elek commented on a change in pull request #1451: HDDS-4117. Normalize Keypath for listKeys.

2020-10-09 Thread GitBox


elek commented on a change in pull request #1451:
URL: https://github.com/apache/hadoop-ozone/pull/1451#discussion_r502356956



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/OMClientRequest.java
##
@@ -298,21 +296,11 @@ public static String validateAndNormalizeKey(boolean 
enableFileSystemPaths,
 }
   }
 
-  @SuppressFBWarnings("DMI_HARDCODED_ABSOLUTE_FILENAME")
+
   public static String validateAndNormalizeKey(String keyName)
   throws OMException {
-String normalizedKeyName;
-if (keyName.startsWith(OM_KEY_PREFIX)) {
-  normalizedKeyName = Paths.get(keyName).toUri().normalize().getPath();
-} else {
-  normalizedKeyName = Paths.get(OM_KEY_PREFIX, keyName).toUri()
-  .normalize().getPath();
-}
-if (!keyName.equals(normalizedKeyName)) {
-  LOG.debug("Normalized key {} to {} ", keyName,
-  normalizedKeyName.substring(1));
-}
-return isValidKeyPath(normalizedKeyName.substring(1));
+String normalizedKeyName = OmUtils.normalizeKey(keyName);
+return isValidKeyPath(normalizedKeyName);

Review comment:
   Got it, thanks a lot. These are the cases where the normalization will 
result in an invalid path due to too many `..` (for example).
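   For example (worked out by hand as a sketch of why validation has to run 
after normalization):
   
   ```java
   String keyName = "a/../../b";
   String normalized = Paths.get("/", keyName)
       .toUri().normalize().getPath();   // roughly "/../b": one ".." survives
   // After stripping the leading "/", the path still contains "..",
   // so isValidKeyPath should reject it instead of storing an escaping key.
   ```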








[GitHub] [hadoop-ozone] elek commented on a change in pull request #1451: HDDS-4117. Normalize Keypath for listKeys.

2020-10-09 Thread GitBox


elek commented on a change in pull request #1451:
URL: https://github.com/apache/hadoop-ozone/pull/1451#discussion_r502352349



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
##
@@ -919,12 +920,32 @@ private boolean isKeyEmpty(OmKeyInfo keyInfo) {
 // underlying table using an iterator. That automatically creates a
 // snapshot of the data, so we don't need these locks at a higher level
 // when we iterate.
+
+startKey = normalizeListKeyPath(startKey);
+keyPrefix = normalizeListKeyPath(keyPrefix);

Review comment:
   Sorry, I don't get it. Why do we need to check it _inside 
normalizeListKeyPath_?
   
   As I wrote, it's a very minor thing, but it seems better to move the check 
out of the method because it improves the readability (IMHO!).
   
   When somebody reads the `listKeys` method, it suggests that the keys are 
normalized, but in fact they are normalized only if enableFileSystemPaths is 
set. This can be confusing, as the method name is `normalizeListKeyPath` and 
not something like `normalizeListKeyPathIfNormalizationIsEnabled`.
   
   I suggested moving this condition out of the method to improve the 
readability, but if you think it's a bad idea, it can be ignored.
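   At the call site, the variant elek describes would look roughly like this 
(generic parameters restored for readability):
   
   ```java
   // listKeys: the flag check stays here, so the conditional normalization
   // is explicit to the reader.
   if (enableFileSystemPaths) {
     startKey = normalizeListKeyPath(startKey);
     keyPrefix = normalizeListKeyPath(keyPrefix);
   }
   List<OmKeyInfo> keyList = metadataManager.listKeys(volumeName, bucketName,
       startKey, keyPrefix, maxKeys);
   ```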








[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #1473: HDDS-4266: CreateFile : store parent dir entries into DirTable and file entry into separate FileTable

2020-10-09 Thread GitBox


rakeshadr commented on a change in pull request #1473:
URL: https://github.com/apache/hadoop-ozone/pull/1473#discussion_r502350594



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
##
@@ -227,6 +247,9 @@ protected OmMetadataManagerImpl() {
 
   @Override
   public Table<String, OmKeyInfo> getOpenKeyTable() {
+if (enableFSPaths && OzoneManagerRatisUtils.isOmLayoutVersionV1()) {

Review comment:
   Good catch @bharatviswa504. Please feel free to add anything else that is 
needed. Thanks again!
   
   Based on our offline discussions, below is the expected behavior for the 
different requests:
   
   **V1 feature version** : Following ops shouldn't depend on enableFSPaths flag
   1) FileCreate  -> Look into dirTable for parents. Then create entries in 
openFileTable and on close add it to fileTable.
   2) DirCreate  -> Create entries in dirTable
   3) File/DirDelete -> Look into fileTable and dirTable for the keys.
   4) File/DirRename-> Look into fileTable and dirTable for the keys.
   
   **V1 feature version & enableFSPaths=true**
   1) KeyCreate ---> Look into dirTable for parents. Create entries in 
openFileTable and on close add it to fileTable.
   2) KeyDelete ---> Look into fileTable and dirTable for the keys.
   3) KeyRename -> supported only in ozone shell. It should look into fileTable 
and dirTable for the keys.
   
   **V1 feature version & enableFSPaths=false**
   1) KeyCreate ---> Create entries in openKeyTable and on close add it to 
keyTable.
   2) KeyDelete ---> Look into keyTable for the keys.
   3) KeyRename -> supported only in ozone shell. It should look into keyTable 
for the keys.
   
   In this PR, I will handle only the `FileCreate` request; checks for 
enableFSPaths in KeyCommit are not yet provided. I will make these changes in 
the latest commit.
   
   Later, I will raise subsequent jiras for handling KeyCreate/KeyCommit and 
other ops.
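   Condensing the matrix above into code (illustrative only; the boolean names 
are made up for the example):
   
   ```java
   boolean v1 = OzoneManagerRatisUtils.isOmLayoutVersionV1();
   
   // File/Dir create, delete and rename use dirTable/fileTable whenever the
   // layout is V1, regardless of enableFSPaths.
   boolean useFileTablesForFileOps = v1;
   
   // Plain key create/delete/rename use the new tables only when both the V1
   // layout and enableFSPaths are in effect.
   boolean useFileTablesForKeyOps = v1 && enableFSPaths;
   ```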








[jira] [Commented] (HDDS-4209) S3A Filesystem does not work with Ozone S3 in file system compat mode

2020-10-09 Thread Marton Elek (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17210831#comment-17210831
 ] 

Marton Elek commented on HDDS-4209:
---

If not, it might be better to add this information to the documentation.

There is an open pull request for this jira. Shall we close it?

> S3A Filesystem does not work with Ozone S3 in file system compat mode
> -
>
> Key: HDDS-4209
> URL: https://issues.apache.org/jira/browse/HDDS-4209
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
>  Labels: OzoneS3, S3A, pull-request-available
>
> When *ozone.om.enable.filesystem.paths* is enabled
>  
> hdfs dfs -mkdir -p s3a://b12345/d11/d12 -> Success
> hdfs dfs -put /tmp/file1 s3a://b12345/d11/d12/file1 -> fails with below error
>  
> {code:java}
> 2020-09-04 03:53:51,377 ERROR 
> org.apache.hadoop.ozone.om.request.key.OMKeyCreateRequest: Key creation 
> failed. Volume:s3v, Bucket:b1234, Key:d11/d12/file1._COPYING_. Exception:{}
> NOT_A_FILE org.apache.hadoop.ozone.om.exceptions.OMException: Can not create 
> file: cp/k1._COPYING_ as there is already file in the given path
>  at 
> org.apache.hadoop.ozone.om.request.key.OMKeyCreateRequest.validateAndUpdateCache(OMKeyCreateRequest.java:256)
>  at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerRequestHandler.handleWriteRequest(OzoneManagerRequestHandler.java:227)
>  at 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerStateMachine.runCommand(OzoneManagerStateMachine.java:428)
>  at 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerStateMachine.lambda$applyTransaction$1(OzoneManagerStateMachine.java:246)
>  at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748){code}
> *Reason for this*
>  When the S3A filesystem creates a directory, it creates an empty file as a 
> directory marker.
> *Now the entries in the Ozone KeyTable after creating the directory are*
>  d11/
>  d11/d12
> Because of this, OMFileRequest.VerifyInFilesPath fails with 
> FILE_EXISTS_IN_GIVEN_PATH, because d11/d12 is considered a file, not a 
> directory. (In Ozone, directories currently end with a trailing "/".)
> So, when d11/d12/file is created, we check that the parent exists; d11/d12 is 
> considered a file, and the request fails with NOT_A_FILE.
> When the flag is disabled it works fine: during key create we do not check 
> any filesystem semantics and do not create intermediate directories.
> {code:java}
> [root@bvoz-1 ~]# hdfs dfs -mkdir -p s3a://b12345/d11/d12
> [root@bvoz-1 ~]# hdfs dfs -put /etc/hadoop/conf/ozone-site.xml 
> s3a://b12345/d11/d12/k1
> [root@bvoz-1 ~]# hdfs dfs -ls s3a://b12345/d11/d12
> Found 1 items
> -rw-rw-rw-   1 systest systest   2373 2020-09-04 04:45 
> s3a://b12345/d11/d12/k1
> {code}
>  







[jira] [Comment Edited] (HDDS-4308) Fix issue with quota update

2020-10-09 Thread mingchao zhao (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17210583#comment-17210583
 ] 

mingchao zhao edited comment on HDDS-4308 at 10/9/20, 7:31 AM:
---

Hi [~bharat] Sorry for not replying in time due to the holiday.

I get your point: because the cached volumeArgs object in OM is unique, the 
usedBytes in that shared object may be modified again before the DB update 
completes. Using the cache avoids volume locking, which keeps the performance 
impact minimal, but I had not noticed the problem you mentioned above.
I think the better solution here is to copy a new volumeArgs object in the 
Request before addResponseToDoubleBuffer (see the sketch below). Of course, 
during the copy we need to lock the volumeArgs object in case other 
operations change it.
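A self-contained sketch of that idea (the class, lock, and copyObject() here 
are illustrative stand-ins, not the exact OM code):
{code:java}
import java.util.concurrent.atomic.LongAdder;

public class VolumeArgsCopyDemo {
  private static final Object VOLUME_LOCK = new Object();

  // Stand-in for OmVolumeArgs with a copyObject()-style snapshot method.
  static class VolumeArgs {
    final LongAdder usedBytes = new LongAdder();
    VolumeArgs copyObject() {
      VolumeArgs copy = new VolumeArgs();
      copy.usedBytes.add(usedBytes.sum());
      return copy;
    }
  }

  public static void main(String[] args) {
    VolumeArgs cached = new VolumeArgs();
    cached.usedBytes.add(10000);

    VolumeArgs snapshot;
    synchronized (VOLUME_LOCK) {      // lock so no update races with the copy
      snapshot = cached.copyObject(); // the private copy goes to the double buffer
    }
    cached.usedBytes.add(-2000);      // a later transaction mutates the cache

    System.out.println(snapshot.usedBytes.sum()); // still 10000: copy is isolated
  }
}
{code}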

Any other suggestion here?




was (Author: micahzhao):
Hi [~bharat] Sorry for not replying in time due to the holiday.

I get your point: because the cached volumeArgs object in OM is unique, the 
usedBytes in that shared object may be modified again before the DB update 
completes. Using the cache avoids volume locking, which keeps the performance 
impact minimal, but I had not noticed the problem you mentioned above.
I think the better solution here is to copy a new volumeArgs object in the 
Request before addResponseToDoubleBuffer. Of course, during the copy we need 
to lock the volumeArgs object in case other operations change it.

Any other solution here?



> Fix issue with quota update
> ---
>
> Key: HDDS-4308
> URL: https://issues.apache.org/jira/browse/HDDS-4308
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Priority: Blocker
>
> Currently volumeArgs is fetched with getCacheValue and the same object is 
> put into the doubleBuffer; this might cause issues.
> Let's take the below scenario:
> Initial VolumeArgs quotaBytes -> 10000
> 1. T1 -> Updates VolumeArgs, subtracting 1000, and puts this updated 
> volumeArgs into the DoubleBuffer.
> 2. T2 -> Updates VolumeArgs, subtracting 2000, and has not yet been flushed 
> to the double buffer.
> *At the end of flushing these transactions, our DB should have 7000 as 
> bytes used.*
> Now T1 is picked up by the double buffer, and when it commits, because it 
> put the cached object into the doubleBuffer, it flushes the value already 
> updated by T2 (as it is the same cache object) and writes bytesUsed as 7000 
> to the DB.
> Now OM restarts, and the DB only has transactions up to T1. (We get this 
> info from the TransactionInfo Table: 
> https://issues.apache.org/jira/browse/HDDS-3685)
> Now T2 is replayed again; as it was not committed to the DB, the DB is again 
> subtracted by 2000 and ends up with 5000.
> But after T2 the value should be 7000, so the DB is in an incorrect state.
> Issue here:
> 1. Because we use the cached object and put that same cached object into the 
> double buffer, this kind of inconsistency can occur.
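> To make the interleaving concrete, a self-contained sketch (a plain 
> LongAdder as a stand-in for the cached volumeArgs, not the actual OM 
> classes):
> {code:java}
> import java.util.concurrent.atomic.LongAdder;
>
> public class CachedObjectRaceDemo {
>   public static void main(String[] args) {
>     LongAdder cachedUsedBytes = new LongAdder(); // the shared cache object
>     cachedUsedBytes.add(10000);
>
>     cachedUsedBytes.add(-1000);            // T1 mutates the cached object
>     LongAdder t1Flushed = cachedUsedBytes; // T1 hands over the SAME object,
>                                            // not a snapshot
>
>     cachedUsedBytes.add(-2000);            // T2 mutates it before T1 flushes
>
>     System.out.println(t1Flushed.sum());   // 7000: T1's flush includes T2
>     // After a restart, replaying T2 subtracts 2000 again -> 5000, not 7000.
>   }
> }
> {code}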



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] ChenSammi commented on a change in pull request #1458: HDDS-3728. Bucket space: check quotaUsageInBytes when write key and allocate block.

2020-10-09 Thread GitBox


ChenSammi commented on a change in pull request #1458:
URL: https://github.com/apache/hadoop-ozone/pull/1458#discussion_r502232094



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMFileCreateRequest.java
##
@@ -279,11 +279,12 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
 
   omVolumeArgs = getVolumeInfo(omMetadataManager, volumeName);
   omBucketInfo = getBucketInfo(omMetadataManager, volumeName, bucketName);
-  // check volume quota
+  // check volume and bucket quota
   long preAllocatedSpace = newLocationList.size()
   * ozoneManager.getScmBlockSize()
   * omKeyInfo.getFactor().getNumber();
   checkVolumeQuotaInBytes(omVolumeArgs, preAllocatedSpace);
+  checkBucketQuotaInBytes(omBucketInfo, preAllocatedSpace);

Review comment:
   Better to check the bucket quota before the volume quota check.
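   A minimal sketch of the suggested ordering, reusing the helper and variable 
   names from the diff above (a drop-in fragment for that method; signatures 
   assumed):
   
   ```java
   // Check the narrower bucket quota first, then the volume quota, so the
   // client sees the most specific limit that is exceeded.
   long preAllocatedSpace = newLocationList.size()
       * ozoneManager.getScmBlockSize()
       * omKeyInfo.getFactor().getNumber();
   checkBucketQuotaInBytes(omBucketInfo, preAllocatedSpace);
   checkVolumeQuotaInBytes(omVolumeArgs, preAllocatedSpace);
   ```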





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] ChenSammi commented on a change in pull request #1458: HDDS-3728. Bucket space: check quotaUsageInBytes when write key and allocate block.

2020-10-09 Thread GitBox


ChenSammi commented on a change in pull request #1458:
URL: https://github.com/apache/hadoop-ozone/pull/1458#discussion_r502231114



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMAllocateBlockRequest.java
##
@@ -218,6 +219,8 @@ public OMClientResponse validateAndUpdateCache(OzoneManager 
ozoneManager,
   // update usedBytes atomically.
   omVolumeArgs.getUsedBytes().add(preAllocatedSpace);
   omBucketInfo.getUsedBytes().add(preAllocatedSpace);
+  long vol = omVolumeArgs.getUsedBytes().sum();
+  long buk = omBucketInfo.getUsedBytes().sum();

Review comment:
   Are these two used somewhere? 





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org