[GitHub] [hadoop] bshashikant commented on issue #846: HDDS-1555. Disable install snapshot for ContainerStateMachine.

2019-05-29 Thread GitBox
bshashikant commented on issue #846: HDDS-1555. Disable install snapshot for 
ContainerStateMachine.
URL: https://github.com/apache/hadoop/pull/846#issuecomment-497213904
 
 
   Thanks @swagle for working on this. The changes look good. I have just one
point to make here:
   Please add a javadoc comment for handleInstallSnapshotFromLeader() explaining
why the pipeline is closed, and also document in the code the reason for
disabling installSnapshotEnabled.
   
   I am +1 after that.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15183) S3Guard store becomes inconsistent after partial failure of rename

2019-05-29 Thread Aaron Fabbri (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16851518#comment-16851518
 ] 

Aaron Fabbri commented on HADOOP-15183:
---

Looking good [~ste...@apache.org]. A lot of nice improvements here. Looks good 
to me so far. Still have a couple of files to work through (large diff) in the 
latest PR.

> S3Guard store becomes inconsistent after partial failure of rename
> --
>
> Key: HADOOP-15183
> URL: https://issues.apache.org/jira/browse/HADOOP-15183
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-15183-001.patch, HADOOP-15183-002.patch, 
> org.apache.hadoop.fs.s3a.auth.ITestAssumeRole-output.txt
>
>
> If an S3A rename() operation fails partway through, such as when the user 
> doesn't have permissions to delete the source files after copying to the 
> destination, then the s3guard view of the world ends up inconsistent. In 
> particular the sequence
>  (assuming src/file* is a list of files file1...file10 and read only to 
> caller)
>
> # create file rename src/file1 dest/ ; expect AccessDeniedException in the 
> delete, dest/file1 will exist
> # delete file dest/file1
> # rename src/file* dest/  ; expect failure
> # list dest; you will not see dest/file1
> You will not see file1 in the listing, presumably because it will have a 
> tombstone marker and the update at the end of the rename() didn't take place: 
> the old data is still there.
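The listing anomaly described above comes from the metastore tombstone shadowing the live S3 object: the partial-failure rename writes the object to S3 but never updates the metadata store, so the earlier tombstone still wins. A toy model of that interaction (all names are hypothetical illustrations, not the S3Guard API), assuming a listing that consults the metadata store before falling through to S3:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

/** Toy model of the failure sequence above; not the S3Guard API. */
public class TombstoneSketch {
    final Set<String> s3 = new HashSet<>();            // raw object store
    final Map<String, Boolean> meta = new HashMap<>(); // path -> true=present, false=tombstone

    /** A successful rename: object lands in S3 and the metastore is updated. */
    void putViaRename(String path) { s3.add(path); meta.put(path, true); }

    /** A delete leaves a tombstone entry in the metastore. */
    void delete(String path) { s3.remove(path); meta.put(path, false); }

    /** A rename that fails partway: the copy reached S3, but the aborted
     *  operation never updated the metadata store. */
    void failedRenameCopy(String path) { s3.add(path); /* no meta update */ }

    /** Listing semantics: a metastore entry (including a tombstone) wins;
     *  only unknown paths fall through to S3. */
    boolean visibleInListing(String path) {
        Boolean m = meta.get(path);
        if (m != null) {
            return m;
        }
        return s3.contains(path);
    }
}
```

Running the four steps from the description against this model shows the object present in S3 yet hidden from the listing, which is exactly the reported inconsistency.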



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[GitHub] [hadoop] ajfabbri commented on a change in pull request #843: HADOOP-15183 S3Guard store becomes inconsistent after partial failure of rename

2019-05-29 Thread GitBox
ajfabbri commented on a change in pull request #843: HADOOP-15183 S3Guard store 
becomes inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/843#discussion_r288854348
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
 ##
 @@ -1225,130 +1301,292 @@ private boolean innerRename(Path source, Path dest)
   }
 }
 
-// If we have a MetadataStore, track deletions/creations.
-Collection<Path> srcPaths = null;
-List<PathMetadata> dstMetas = null;
-if (hasMetadataStore()) {
-  srcPaths = new HashSet<>(); // srcPaths need fast look up before put
-  dstMetas = new ArrayList<>();
-}
-// TODO S3Guard HADOOP-13761: retries when source paths are not visible yet
+// Validation completed: time to begin the operation.
+// The store-specific rename operation is used to keep the store
+// to date with the in-progress operation.
+// for the null store, these are all no-ops.
+final RenameTracker renameTracker =
+metadataStore.initiateRenameOperation(
+createStoreContext(),
+src, srcStatus, dest);
+final AtomicLong bytesCopied = new AtomicLong();
+int renameParallelLimit = RENAME_PARALLEL_LIMIT;
+final List<CompletableFuture<Path>> activeCopies =
+new ArrayList<>(renameParallelLimit);
+// aggregate operation to wait for the copies to complete then reset
+// the list.
+final FunctionsRaisingIOE.FunctionRaisingIOE<String, Void>
+completeActiveCopies = (String reason) -> {
+  LOG.debug("Waiting for {} active copies to complete: {}",
+  activeCopies.size(), reason);
+  waitForCompletion(activeCopies);
+  activeCopies.clear();
+  return null;
+};
+
 // TODO S3Guard: performance: mark destination dirs as authoritative
 
 // Ok! Time to start
-if (srcStatus.isFile()) {
-  LOG.debug("rename: renaming file {} to {}", src, dst);
-  long length = srcStatus.getLen();
-  S3ObjectAttributes objectAttributes =
-  createObjectAttributes(srcStatus.getPath(),
-  srcStatus.getETag(), srcStatus.getVersionId());
-  S3AReadOpContext readContext = createReadContext(srcStatus, inputPolicy,
-  changeDetectionPolicy, readAhead);
-  if (dstStatus != null && dstStatus.isDirectory()) {
-String newDstKey = maybeAddTrailingSlash(dstKey);
-String filename =
-srcKey.substring(pathToKey(src.getParent()).length()+1);
-newDstKey = newDstKey + filename;
-CopyResult copyResult = copyFile(srcKey, newDstKey, length,
-objectAttributes, readContext);
-S3Guard.addMoveFile(metadataStore, srcPaths, dstMetas, src,
-keyToQualifiedPath(newDstKey), length, getDefaultBlockSize(dst),
-username, copyResult.getETag(), copyResult.getVersionId());
+try {
+  if (srcStatus.isFile()) {
+// the source is a file.
+Path copyDestinationPath = dst;
+String copyDestinationKey = dstKey;
+S3ObjectAttributes sourceAttributes =
+createObjectAttributes(srcStatus);
+S3AReadOpContext readContext = createReadContext(srcStatus, inputPolicy,
+changeDetectionPolicy, readAhead);
+if (dstStatus != null && dstStatus.isDirectory()) {
+  // destination is a directory: build the final destination underneath
+  String newDstKey = maybeAddTrailingSlash(dstKey);
+  String filename =
+  srcKey.substring(pathToKey(src.getParent()).length() + 1);
+  newDstKey = newDstKey + filename;
+  copyDestinationKey = newDstKey;
+  copyDestinationPath = keyToQualifiedPath(newDstKey);
+}
+// destination either does not exist or is a file to overwrite.
+LOG.debug("rename: renaming file {} to {}", src, copyDestinationPath);
+copySourceAndUpdateTracker(renameTracker,
+src,
+srcKey,
+sourceAttributes,
+readContext,
+copyDestinationPath,
+copyDestinationKey,
+false);
+bytesCopied.addAndGet(srcStatus.getLen());
+ // delete the source
+deleteObjectAtPath(src, srcKey, true);
+// and update the tracker
+renameTracker.sourceObjectsDeleted(Lists.newArrayList(src));
   } else {
-CopyResult copyResult = copyFile(srcKey, dstKey, srcStatus.getLen(),
-objectAttributes, readContext);
-S3Guard.addMoveFile(metadataStore, srcPaths, dstMetas, src, dst,
-length, getDefaultBlockSize(dst), username,
-copyResult.getETag(), copyResult.getVersionId());
-  }
-  innerDelete(srcStatus, false);
-} else {
-  LOG.debug("rename: renaming directory {} to {}", src, dst);
-
-  // This is a directory to directory copy
-  dstKey = maybeAddTrailingSlash(dstKey);
-  srcKey = maybeAddTrailingSlash(srcKey);
+
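The completeActiveCopies lambda in the diff above drains a bounded list of in-flight copy futures and then resets the list. A minimal, self-contained sketch of that pattern using plain java.util.concurrent, with a dummy task standing in for the S3 copy (class and method names here are illustrative, not the Hadoop API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

/** Sketch of the bounded in-flight copies pattern from the quoted diff. */
public class ActiveCopiesSketch {
    // Mirrors RENAME_PARALLEL_LIMIT: how many copies may be queued before
    // the operation pauses and awaits completion.
    static final int PARALLEL_LIMIT = 10;

    /** Submit n dummy "copy" tasks, draining whenever the limit is reached. */
    public static int runCopies(int n) {
        List<CompletableFuture<Integer>> activeCopies =
            new ArrayList<>(PARALLEL_LIMIT);
        int completed = 0;
        for (int i = 0; i < n; i++) {
            final int id = i;
            activeCopies.add(CompletableFuture.supplyAsync(() -> id));
            if (activeCopies.size() == PARALLEL_LIMIT) {
                completed += completeActiveCopies(activeCopies);
            }
        }
        // drain whatever is left, as the rename does before its bulk delete
        completed += completeActiveCopies(activeCopies);
        return completed;
    }

    /** Wait for all in-flight copies, then reset the list
     *  (the role of the completeActiveCopies lambda in the diff). */
    static int completeActiveCopies(List<CompletableFuture<Integer>> activeCopies) {
        CompletableFuture.allOf(
            activeCopies.toArray(new CompletableFuture[0])).join();
        int done = activeCopies.size();
        activeCopies.clear();
        return done;
    }
}
```

Clearing and reusing one list keeps the number of queued copies bounded without needing a dedicated bounded executor.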

[GitHub] [hadoop] ajfabbri commented on a change in pull request #843: HADOOP-15183 S3Guard store becomes inconsistent after partial failure of rename

2019-05-29 Thread GitBox
ajfabbri commented on a change in pull request #843: HADOOP-15183 S3Guard store 
becomes inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/843#discussion_r288790499
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
 ##
 @@ -730,21 +779,29 @@ public S3AEncryptionMethods 
getServerSideEncryptionAlgorithm() {
 
   /**
* Demand create the directory allocator, then create a temporary file.
+   * This does not mark the file for deletion when a process is exits.
 
 Review comment:
   nit: /process is exits/process exits/





[GitHub] [hadoop] ajfabbri commented on a change in pull request #843: HADOOP-15183 S3Guard store becomes inconsistent after partial failure of rename

2019-05-29 Thread GitBox
ajfabbri commented on a change in pull request #843: HADOOP-15183 S3Guard store 
becomes inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/843#discussion_r288790196
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
 ##
 @@ -232,8 +248,33 @@
   /** Principal who created the FS; recorded during initialization. */
   private UserGroupInformation owner;
 
-  // The maximum number of entries that can be deleted in any call to s3
+  /**
+   * The maximum number of entries that can be deleted in any bulk delete
+   * call to S3 {@value}.
+   */
   private static final int MAX_ENTRIES_TO_DELETE = 1000;
+
+  /**
+   * This is an arbitrary value: {@value}.
+   * It declares how many parallel copy operations
+   * in a single rename can be queued before the operation pauses
+   * and awaits completion.
+   * A very large value wouldn't just starve other threads from
+   * performing work; there's also a risk that the S3 store itself would
+   * throttle operations (which all go to the same shard).
+   * It is not currently configurable just to avoid people choosing values
+   * which work on a microbenchmark (single rename, no other work, ...)
+   * but don't scale well to execution in a large process against a common
+   * store, all while separate processes are working with the same shard
+   * of storage.
+   *
+   * It should be a factor of {@link #MAX_ENTRIES_TO_DELETE} so that
+   * all copies will have finished before deletion is contemplated.
+   * (There's always a block for that, it just makes more sense to
+   * perform the bulk delete after another block of copies have completed).
+   */
+  public static final int RENAME_PARALLEL_LIMIT = 10;
 
 Review comment:
   Yep, we can always make this tunable later if we care.
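The javadoc's invariant — RENAME_PARALLEL_LIMIT should be a factor of MAX_ENTRIES_TO_DELETE so that every copy batch completes before a bulk delete is contemplated — can be sanity-checked in isolation. A sketch using the two constants from the quoted diff (the helper method is hypothetical, not part of S3AFileSystem):

```java
/** Sanity check that the copy batch size divides the bulk-delete page size. */
public class LimitInvariantSketch {
    static final int MAX_ENTRIES_TO_DELETE = 1000; // from the quoted diff
    static final int RENAME_PARALLEL_LIMIT = 10;   // from the quoted diff

    /** "A factor of MAX_ENTRIES_TO_DELETE": each bulk-delete page is an
     *  exact number of copy batches, so all copies in the page have
     *  finished before deletion starts. */
    static boolean limitsCompatible() {
        return MAX_ENTRIES_TO_DELETE % RENAME_PARALLEL_LIMIT == 0;
    }
}
```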





[GitHub] [hadoop] ajfabbri commented on a change in pull request #843: HADOOP-15183 S3Guard store becomes inconsistent after partial failure of rename

2019-05-29 Thread GitBox
ajfabbri commented on a change in pull request #843: HADOOP-15183 S3Guard store 
becomes inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/843#discussion_r288847579
 
 


[GitHub] [hadoop] ajfabbri commented on a change in pull request #843: HADOOP-15183 S3Guard store becomes inconsistent after partial failure of rename

2019-05-29 Thread GitBox
ajfabbri commented on a change in pull request #843: HADOOP-15183 S3Guard store 
becomes inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/843#discussion_r288854508
 
 


[GitHub] [hadoop] ajfabbri commented on a change in pull request #843: HADOOP-15183 S3Guard store becomes inconsistent after partial failure of rename

2019-05-29 Thread GitBox
ajfabbri commented on a change in pull request #843: HADOOP-15183 S3Guard store 
becomes inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/843#discussion_r288854779
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
 ##
 @@ -2016,22 +2264,116 @@ void removeKeys(List<DeleteObjectsRequest.KeyVersion> keysToDelete,
 for (DeleteObjectsRequest.KeyVersion keyVersion : keysToDelete) {
   blockRootDelete(keyVersion.getKey());
 }
-if (enableMultiObjectsDelete) {
-  deleteObjects(new DeleteObjectsRequest(bucket)
-  .withKeys(keysToDelete)
-  .withQuiet(true));
-} else {
-  for (DeleteObjectsRequest.KeyVersion keyVersion : keysToDelete) {
-deleteObject(keyVersion.getKey());
+try {
+  if (enableMultiObjectsDelete) {
+deleteObjects(new DeleteObjectsRequest(bucket)
+.withKeys(keysToDelete)
+.withQuiet(true));
+  } else {
+for (DeleteObjectsRequest.KeyVersion keyVersion : keysToDelete) {
+  deleteObject(keyVersion.getKey());
+}
   }
+} catch (MultiObjectDeleteException ex) {
+  // partial delete.
+  // Update the stats with the count of the actual number of successful
+  // deletions.
+  int rejected = ex.getErrors().size();
+  noteDeleted(keysToDelete.size() - rejected, deleteFakeDir);
+  incrementStatistic(FILES_DELETE_REJECTED, rejected);
+  throw ex;
 }
+noteDeleted(keysToDelete.size(), deleteFakeDir);
+  }
+
+  /**
+   * Note the deletion of files or fake directories deleted.
+   * @param count count of keys deleted.
+   * @param deleteFakeDir are the deletions fake directories?
+   */
+  private void noteDeleted(final int count, final boolean deleteFakeDir) {
 if (!deleteFakeDir) {
-  instrumentation.fileDeleted(keysToDelete.size());
+  instrumentation.fileDeleted(count);
 } else {
-  instrumentation.fakeDirsDeleted(keysToDelete.size());
+  instrumentation.fakeDirsDeleted(count);
 }
-if (clearKeys) {
-  keysToDelete.clear();
+  }
+
+  /**
+   * Invoke {@link #removeKeysS3(List, boolean)} with handling of
+   * {@code MultiObjectDeleteException} in which S3Guard is updated with all
+   * deleted entries, before the exception is rethrown.
+   *
+   * If an exception is not raised, the metastore is not updated.
+   * @param keysToDelete collection of keys to delete on the s3-backend.
+   *if empty, no request is made of the object store.
+   * @param deleteFakeDir indicates whether this is for deleting fake dirs
+   * @throws InvalidRequestException if the request was rejected due to
+   * a mistaken attempt to delete the root directory.
+   * @throws MultiObjectDeleteException one or more of the keys could not
+   * be deleted in a multiple object delete operation.
+   * @throws AmazonClientException amazon-layer failure.
+   * @throws IOException other IO Exception.
+   */
+  @VisibleForTesting
+  @Retries.RetryMixed
+  void removeKeys(
+  final List<DeleteObjectsRequest.KeyVersion> keysToDelete,
+  final boolean deleteFakeDir)
+  throws MultiObjectDeleteException, AmazonClientException,
+  IOException {
+removeKeys(keysToDelete, deleteFakeDir, new ArrayList<>());
+  }
+
+  /**
+   * Invoke {@link #removeKeysS3(List, boolean)} with handling of
+   * {@code MultiObjectDeleteException} in which S3Guard is updated with all
+   * deleted entries, before the exception is rethrown.
+   *
+   * @param keysToDelete collection of keys to delete on the s3-backend.
+   *if empty, no request is made of the object store.
+   * @param deleteFakeDir indicates whether this is for deleting fake dirs
+   * @param undeletedObjectsOnFailure List which will be built up of all
+   * files that were not deleted. This happens even as an exception
+   * is raised.
+   * @throws InvalidRequestException if the request was rejected due to
+   * a mistaken attempt to delete the root directory.
+   * @throws MultiObjectDeleteException one or more of the keys could not
+   * be deleted in a multiple object delete operation.
+   * @throws AmazonClientException amazon-layer failure.
+   * @throws IOException other IO Exception.
+   */
+  @VisibleForTesting
+  @Retries.RetryMixed
+  void removeKeys(
 
 Review comment:
   Is there a test that exercises the partial failure logic here? (I may answer 
this as I get further in the review)
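The partial-failure accounting quoted above — count only the successful deletions, record the rejected ones, then rethrow — can be sketched without the AWS SDK. PartialDeleteException and the counters below are stand-ins for MultiObjectDeleteException and the S3A instrumentation, not the real API:

```java
import java.util.List;

/** Sketch of the partial-delete accounting in the quoted diff. */
public class PartialDeleteSketch {
    /** Stand-in for the AWS SDK's MultiObjectDeleteException. */
    static class PartialDeleteException extends RuntimeException {
        final int rejected;
        PartialDeleteException(int rejected) { this.rejected = rejected; }
    }

    int filesDeleted;    // stand-in for instrumentation.fileDeleted()
    int deletesRejected; // stand-in for the FILES_DELETE_REJECTED statistic

    /** Delete keys; simulate a store that rejects keys at index >= failFrom. */
    void removeKeys(List<String> keys, int failFrom) {
        try {
            if (failFrom < keys.size()) {
                throw new PartialDeleteException(keys.size() - failFrom);
            }
        } catch (PartialDeleteException ex) {
            // partial delete: count only the successful deletions and the
            // rejected count, then rethrow for the caller to handle
            filesDeleted += keys.size() - ex.rejected;
            deletesRejected += ex.rejected;
            throw ex;
        }
        filesDeleted += keys.size(); // full-success path
    }
}
```

The key design point mirrored here is that the statistics are corrected inside the catch block before the exception propagates, so the counters stay accurate even on failure.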



[GitHub] [hadoop] ajfabbri commented on a change in pull request #843: HADOOP-15183 S3Guard store becomes inconsistent after partial failure of rename

2019-05-29 Thread GitBox
ajfabbri commented on a change in pull request #843: HADOOP-15183 S3Guard store 
becomes inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/843#discussion_r288855378
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Statistic.java
 ##
 @@ -207,6 +209,12 @@
   "S3Guard metadata store put one metadata path latency"),
   S3GUARD_METADATASTORE_INITIALIZATION("s3guard_metadatastore_initialization",
   "S3Guard metadata store initialization times"),
+  S3GUARD_METADATASTORE_RECORD_READS(
+  "s3guard_metadatastore_record_reads",
+  "S3Guard metadata store records read"),
+  S3GUARD_METADATASTORE_RECORD_WRITES(
+  "s3guard_metadatastore_record_writes",
+  "S3Guard metadata store records written"),
 
 Review comment:
   Seems like this could be interesting for comparing different rename 
trackers? If so I'm curious what sort of stats / deltas you get with the new  
logic.





[GitHub] [hadoop] bharatviswa504 commented on issue #850: HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…

2019-05-29 Thread GitBox
bharatviswa504 commented on issue #850: HDDS-1551. Implement Bucket Write 
Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#issuecomment-497196689
 
 
   The last commit moved all the classes to a package named bucket under
request/response.





[GitHub] [hadoop] bharatviswa504 commented on issue #850: HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…

2019-05-29 Thread GitBox
bharatviswa504 commented on issue #850: HDDS-1551. Implement Bucket Write 
Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#issuecomment-497193840
 
 
   /retest





[GitHub] [hadoop] hadoop-yetus commented on issue #847: HDDS-1539. Implement addAcl, removeAcl, setAcl, getAcl for Volume. Contributed Ajay Kumar.

2019-05-29 Thread GitBox
hadoop-yetus commented on issue #847: HDDS-1539. Implement 
addAcl,removeAcl,setAcl,getAcl for Volume. Contributed Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/847#issuecomment-497193667
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 30 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 43 | Maven dependency ordering for branch |
   | +1 | mvninstall | 568 | trunk passed |
   | +1 | compile | 266 | trunk passed |
   | +1 | checkstyle | 70 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 821 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 146 | trunk passed |
   | 0 | spotbugs | 293 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 476 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 19 | Maven dependency ordering for patch |
   | +1 | mvninstall | 491 | the patch passed |
   | +1 | compile | 274 | the patch passed |
   | +1 | cc | 274 | the patch passed |
   | +1 | javac | 274 | the patch passed |
   | +1 | checkstyle | 74 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 609 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 146 | the patch passed |
   | +1 | findbugs | 502 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 233 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1316 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 47 | The patch does not generate ASF License warnings. |
   | | | 6310 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-847/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/847 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux fb7108963f48 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9ad7cad |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-847/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-847/3/testReport/ |
   | Max. process+thread count | 4281 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/client 
hadoop-ozone/ozone-manager hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-847/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Updated] (HADOOP-16334) Fix yetus-wrapper not working when HADOOP_YETUS_VERSION greater or equal than 0.9.0

2019-05-29 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16334:
---
Status: Patch Available  (was: Open)

> Fix yetus-wrapper not working when HADOOP_YETUS_VERSION greater or equal than 
> 0.9.0
> ---
>
> Key: HADOOP-16334
> URL: https://issues.apache.org/jira/browse/HADOOP-16334
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: yetus
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] infraio opened a new pull request #873: HADOOP-16337 Start the CLI MiniCluster failed because the default for…

2019-05-29 Thread GitBox
infraio opened a new pull request #873: HADOOP-16337 Start the CLI MiniCluster 
failed because the default for…
URL: https://github.com/apache/hadoop/pull/873
 
 
   …mat option is false





[GitHub] [hadoop] dineshchitlangia commented on issue #872: HDDS-1581. Atleast one of the metadata dir config property must be tagged as REQUIRED

2019-05-29 Thread GitBox
dineshchitlangia commented on issue #872: HDDS-1581. Atleast one of the 
metadata dir config property must be tagged as REQUIRED
URL: https://github.com/apache/hadoop/pull/872#issuecomment-497184181
 
 
   @xiaoyuyao Please help to review/commit. Thanks for guidance.





[GitHub] [hadoop] dineshchitlangia opened a new pull request #872: HDDS-1581. Atleast one of the metadata dir config property must be tagged as REQUIRED

2019-05-29 Thread GitBox
dineshchitlangia opened a new pull request #872: HDDS-1581. Atleast one of the 
metadata dir config property must be tagged as REQUIRED
URL: https://github.com/apache/hadoop/pull/872
 
 
   Added REQUIRED tag on fallback property and updated description of other 
configs as needed.





[jira] [Commented] (HADOOP-16302) Fix typo on Hadoop Site Help dropdown menu

2019-05-29 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16851480#comment-16851480
 ] 

Dinesh Chitlangia commented on HADOOP-16302:


[~elek] :) Thanks for checking! I was wondering the same about a sponsorshop!

> Fix typo on Hadoop Site Help dropdown menu
> --
>
> Key: HADOOP-16302
> URL: https://issues.apache.org/jira/browse/HADOOP-16302
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: site
>Affects Versions: asf-site
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Minor
> Attachments: Screen Shot 2019-05-07 at 11.57.01 PM.png
>
>
> On hadoop.apache.org the Help tab on top menu bar has Sponsorship spelt as 
> Sponsorshop.
> This jira aims to fix this typo.






[jira] [Updated] (HADOOP-16337) Start the CLI MiniCluster failed because the default format option is false

2019-05-29 Thread Guanghao Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HADOOP-16337:

Affects Version/s: 2.8.4
   2.9.2
   2.8.5

> Start the CLI MiniCluster failed because the default format option is false
> ---
>
> Key: HADOOP-16337
> URL: https://issues.apache.org/jira/browse/HADOOP-16337
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.4, 2.9.2, 2.8.5
>Reporter: Guanghao Zhang
>Priority: Minor
>
> After HADOOP-14970, the -format option needs to be added when starting the CLI
> MiniCluster, but the CLIMiniCluster document was not updated. Following the
> document to start the CLI MiniCluster results in an error.
> {code:java}
> 19/05/30 10:27:19 WARN common.Storage: Storage directory 
> /home/hao/soft/hadoop-2.8.4/build/test/data/dfs/name1 does not exist
> 19/05/30 10:27:19 WARN namenode.FSNamesystem: Encountered exception loading 
> fsimage
> org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory 
> /home/hao/soft/hadoop-2.8.4/build/test/data/dfs/name1 is in an inconsistent 
> state: storage directory does not exist or is not accessible.
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:369)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:220)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1044)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:707)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:635)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:696)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:906)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:885)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1626)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1162)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1037)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:830)
> at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:485)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:444)
> at 
> org.apache.hadoop.mapreduce.MiniHadoopClusterManager.start(MiniHadoopClusterManager.java:154)
> at 
> org.apache.hadoop.mapreduce.MiniHadoopClusterManager.run(MiniHadoopClusterManager.java:129)
> at 
> org.apache.hadoop.mapreduce.MiniHadoopClusterManager.main(MiniHadoopClusterManager.java:316)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
> at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
> at org.apache.hadoop.test.MapredTestDriver.run(MapredTestDriver.java:136)
> at org.apache.hadoop.test.MapredTestDriver.main(MapredTestDriver.java:144)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:234)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
> {code}






[jira] [Created] (HADOOP-16337) Start the CLI MiniCluster failed because the default format option is false

2019-05-29 Thread Guanghao Zhang (JIRA)
Guanghao Zhang created HADOOP-16337:
---

 Summary: Start the CLI MiniCluster failed because the default 
format option is false
 Key: HADOOP-16337
 URL: https://issues.apache.org/jira/browse/HADOOP-16337
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Guanghao Zhang


After HADOOP-14970, the -format option needs to be added when starting the CLI
MiniCluster, but the CLIMiniCluster document was not updated. Following the
document to start the CLI MiniCluster results in an error.
{code:java}
19/05/30 10:27:19 WARN common.Storage: Storage directory 
/home/hao/soft/hadoop-2.8.4/build/test/data/dfs/name1 does not exist
19/05/30 10:27:19 WARN namenode.FSNamesystem: Encountered exception loading 
fsimage
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory 
/home/hao/soft/hadoop-2.8.4/build/test/data/dfs/name1 is in an inconsistent 
state: storage directory does not exist or is not accessible.
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:369)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:220)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1044)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:707)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:635)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:696)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:906)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:885)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1626)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1162)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1037)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:830)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:485)
at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:444)
at 
org.apache.hadoop.mapreduce.MiniHadoopClusterManager.start(MiniHadoopClusterManager.java:154)
at 
org.apache.hadoop.mapreduce.MiniHadoopClusterManager.run(MiniHadoopClusterManager.java:129)
at 
org.apache.hadoop.mapreduce.MiniHadoopClusterManager.main(MiniHadoopClusterManager.java:316)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
at org.apache.hadoop.test.MapredTestDriver.run(MapredTestDriver.java:136)
at org.apache.hadoop.test.MapredTestDriver.main(MapredTestDriver.java:144)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:234)
at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
{code}
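The fix the stack trace points at is passing -format on the first start, so the NameNode formats fresh storage instead of looking for an existing directory. A sketch of the corrected invocation, assuming a 2.8.x tarball layout; the jar path and port numbers below are illustrative assumptions, not taken from the report:

```shell
# Sketch: start the CLI MiniCluster with an explicit -format so the NameNode
# formats its storage directory rather than failing with
# InconsistentFSStateException. Paths and ports are assumptions for a 2.8.x
# tarball install; adjust them to your environment.
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-*-tests.jar \
  minicluster -format -rmport 8032 -jhsport 10200
```

Without -format (whose default became false after HADOOP-14970), a fresh checkout has no formatted storage under build/test/data/dfs, which is what produces the error above.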






[GitHub] [hadoop] hadoop-yetus commented on issue #850: HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…

2019-05-29 Thread GitBox
hadoop-yetus commented on issue #850: HDDS-1551. Implement Bucket Write 
Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#issuecomment-497172980
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 30 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 2 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 15 new or modified test files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 62 | Maven dependency ordering for branch |
   | +1 | mvninstall | 515 | trunk passed |
   | +1 | compile | 279 | trunk passed |
   | +1 | checkstyle | 85 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 824 | branch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 134 | trunk passed |
   | 0 | spotbugs | 292 | Used deprecated FindBugs config; considering switching to SpotBugs. |
   | +1 | findbugs | 477 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 35 | Maven dependency ordering for patch |
   | +1 | mvninstall | 474 | the patch passed |
   | +1 | compile | 259 | the patch passed |
   | +1 | cc | 259 | the patch passed |
   | +1 | javac | 259 | the patch passed |
   | -0 | checkstyle | 47 | hadoop-ozone: The patch generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 673 | patch has no errors when building and testing our client artifacts. |
   | -1 | javadoc | 84 | hadoop-ozone generated 3 new + 5 unchanged - 0 fixed = 8 total (was 5) |
   | +1 | findbugs | 496 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 243 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1283 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 40 | The patch does not generate ASF License warnings. |
   | | | 6371 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.block.TestBlockManager |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-850/11/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/850 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle cc |
   | uname | Linux 9e24913aea1c 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9ad7cad |
   | Default Java | 1.8.0_212 |
   | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-850/11/artifact/out/diff-checkstyle-hadoop-ozone.txt |
   | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-850/11/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-850/11/artifact/out/patch-unit-hadoop-hdds.txt |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-850/11/artifact/out/patch-unit-hadoop-ozone.txt |
   | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-850/11/testReport/ |
   | Max. process+thread count | 4580 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-850/11/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-16334) Fix yetus-wrapper not working when HADOOP_YETUS_VERSION greater or equal than 0.9.0

2019-05-29 Thread Wanqiang Ji (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16851457#comment-16851457
 ] 

Wanqiang Ji commented on HADOOP-16334:
--

Hi [~aajisaka], can you take some time to help review?

> Fix yetus-wrapper not working when HADOOP_YETUS_VERSION greater or equal than 
> 0.9.0
> ---
>
> Key: HADOOP-16334
> URL: https://issues.apache.org/jira/browse/HADOOP-16334
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: yetus
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
>







[GitHub] [hadoop] hadoop-yetus commented on issue #850: HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…

2019-05-29 Thread GitBox
hadoop-yetus commented on issue #850: HDDS-1551. Implement Bucket Write 
Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#issuecomment-497172105
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 29 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 15 new or modified test files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 62 | Maven dependency ordering for branch |
   | +1 | mvninstall | 535 | trunk passed |
   | +1 | compile | 269 | trunk passed |
   | +1 | checkstyle | 70 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 838 | branch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 136 | trunk passed |
   | 0 | spotbugs | 297 | Used deprecated FindBugs config; considering switching to SpotBugs. |
   | +1 | findbugs | 491 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 28 | Maven dependency ordering for patch |
   | +1 | mvninstall | 462 | the patch passed |
   | +1 | compile | 266 | the patch passed |
   | +1 | cc | 266 | the patch passed |
   | +1 | javac | 266 | the patch passed |
   | +1 | checkstyle | 76 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 635 | patch has no errors when building and testing our client artifacts. |
   | -1 | javadoc | 75 | hadoop-ozone generated 3 new + 5 unchanged - 0 fixed = 8 total (was 5) |
   | +1 | findbugs | 487 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 232 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1506 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 44 | The patch does not generate ASF License warnings. |
   | | | 6491 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.TestContainerStateMachineIdempotency |
   |   | hadoop.ozone.client.rpc.TestHybridPipelineOnDatanode |
   |   | hadoop.ozone.TestMiniChaosOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-850/10/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/850 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle cc |
   | uname | Linux 07534816c8d6 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9ad7cad |
   | Default Java | 1.8.0_212 |
   | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-850/10/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-850/10/artifact/out/patch-unit-hadoop-ozone.txt |
   | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-850/10/testReport/ |
   | Max. process+thread count | 3661 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-850/10/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] xiaoyuyao commented on issue #847: HDDS-1539. Implement addAcl, removeAcl, setAcl, getAcl for Volume. Contributed Ajay Kumar.

2019-05-29 Thread GitBox
xiaoyuyao commented on issue #847: HDDS-1539. Implement 
addAcl,removeAcl,setAcl,getAcl for Volume. Contributed Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/847#issuecomment-497169505
 
 
   @ajayydv, half of the test failures are related. Can you fix them? 





[GitHub] [hadoop] hadoop-yetus commented on issue #871: HDDS-1579. Create OMDoubleBuffer metrics.

2019-05-29 Thread GitBox
hadoop-yetus commented on issue #871: HDDS-1579. Create OMDoubleBuffer metrics.
URL: https://github.com/apache/hadoop/pull/871#issuecomment-497164591
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 33 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 544 | trunk passed |
   | +1 | compile | 250 | trunk passed |
   | +1 | checkstyle | 70 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 825 | branch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 153 | trunk passed |
   | 0 | spotbugs | 299 | Used deprecated FindBugs config; considering switching to SpotBugs. |
   | +1 | findbugs | 480 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 481 | the patch passed |
   | +1 | compile | 261 | the patch passed |
   | +1 | javac | 261 | the patch passed |
   | +1 | checkstyle | 75 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 634 | patch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 146 | the patch passed |
   | +1 | findbugs | 513 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 227 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1245 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 64 | The patch does not generate ASF License warnings. |
   | | | 6173 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestKeyInputStream |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-871/2/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/871 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 92e7bb7a36d1 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9ad7cad |
   | Default Java | 1.8.0_212 |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-871/2/artifact/out/patch-unit-hadoop-ozone.txt |
   | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-871/2/testReport/ |
   | Max. process+thread count | 5140 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-871/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #871: HDDS-1579. Create OMDoubleBuffer metrics.

2019-05-29 Thread GitBox
hadoop-yetus commented on issue #871: HDDS-1579. Create OMDoubleBuffer metrics.
URL: https://github.com/apache/hadoop/pull/871#issuecomment-497162182
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 31 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 554 | trunk passed |
   | +1 | compile | 285 | trunk passed |
   | +1 | checkstyle | 68 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 868 | branch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 148 | trunk passed |
   | 0 | spotbugs | 282 | Used deprecated FindBugs config; considering switching to SpotBugs. |
   | +1 | findbugs | 473 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 481 | the patch passed |
   | +1 | compile | 264 | the patch passed |
   | +1 | javac | 264 | the patch passed |
   | +1 | checkstyle | 75 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 1 | The patch has no whitespace issues. |
   | +1 | shadedclient | 663 | patch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 137 | the patch passed |
   | +1 | findbugs | 473 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 226 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1174 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 46 | The patch does not generate ASF License warnings. |
   | | | 6145 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.container.placement.algorithms.TestSCMContainerPlacementRackAware |
   |   | hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.hdds.scm.pipeline.TestPipelineClose |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-871/1/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/871 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux d480a8fe46b5 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9ad7cad |
   | Default Java | 1.8.0_212 |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-871/1/artifact/out/patch-unit-hadoop-hdds.txt |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-871/1/artifact/out/patch-unit-hadoop-ozone.txt |
   | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-871/1/testReport/ |
   | Max. process+thread count | 5043 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-871/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #850: HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…

2019-05-29 Thread GitBox
bharatviswa504 commented on a change in pull request #850: HDDS-1551. Implement 
Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288814519
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ratis/TestOzoneManagerDoubleBufferWithOMResponse.java
 ##
 @@ -390,7 +405,11 @@ private OMBucketCreateResponse createBucket(String 
volumeName,
 OmBucketInfo omBucketInfo =
 OmBucketInfo.newBuilder().setVolumeName(volumeName)
 .setBucketName(bucketName).setCreationTime(Time.now()).build();
-return new OMBucketCreateResponse(omBucketInfo);
+return new OMBucketCreateResponse(omBucketInfo, OMResponse.newBuilder()
 
 Review comment:
   This was added based on Arpit's comment in HDDS-1512, as we want to be able to test the OM
Double Buffer implementation without actual OM responses as well.
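   For readers following along, here is a minimal, self-contained sketch of the double-buffer pattern the test exercises: writers append to an in-memory buffer while a flusher swaps buffers and commits the ready batch. `TinyDoubleBuffer` and its method names are illustrative stand-ins, not the actual `OzoneManagerDoubleBuffer` API.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Minimal double-buffer sketch: add() does a cheap in-memory append to the
 * current buffer; swapAndFlush() swaps buffers so writers are blocked only
 * for the swap, then drains the ready buffer in one batch (standing in for
 * the batched RocksDB commit in Ozone Manager).
 */
class TinyDoubleBuffer {
  private List<String> current = new ArrayList<>();
  private List<String> ready = new ArrayList<>();
  private final List<String> flushed = new ArrayList<>();

  synchronized void add(String entry) {
    current.add(entry);              // cheap append; no I/O on this path
  }

  synchronized void swapAndFlush() {
    List<String> tmp = ready;        // swap the two buffers
    ready = current;
    current = tmp;
    flushed.addAll(ready);           // stand-in for the batched DB commit
    ready.clear();
  }

  synchronized List<String> flushedEntries() {
    return new ArrayList<>(flushed);
  }

  public static void main(String[] args) {
    TinyDoubleBuffer buf = new TinyDoubleBuffer();
    buf.add("createBucket:vol1/bucket1");
    buf.add("createVolume:vol2");
    buf.swapAndFlush();
    buf.add("deleteBucket:vol1/bucket1");
    buf.swapAndFlush();
    System.out.println(buf.flushedEntries().size()); // 3 entries flushed in two batches
  }
}
```

   Note that nothing in the flush loop depends on what the buffered entries are, which is why the double buffer can be tested with dummy responses.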





[GitHub] [hadoop] bharatviswa504 commented on issue #850: HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…

2019-05-29 Thread GitBox
bharatviswa504 commented on issue #850: HDDS-1551. Implement Bucket Write 
Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#issuecomment-497153823
 
 
   /retest





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #850: HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…

2019-05-29 Thread GitBox
bharatviswa504 commented on a change in pull request #850: HDDS-1551. Implement 
Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288813544
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerRatisServer.java
 ##
 @@ -97,6 +112,101 @@ private static long nextCallId() {
 return CALL_ID_COUNTER.getAndIncrement() & Long.MAX_VALUE;
   }
 
+  /**
+   * Submit request to Ratis server.
+   * @param omRequest
+   * @return OMResponse - response returned to the client.
+   * @throws ServiceException
+   */
+  public OMResponse submitRequest(OMRequest omRequest) throws ServiceException 
{
+RaftClientRequest raftClientRequest =
+createWriteRaftClientRequest(omRequest);
+RaftClientReply raftClientReply;
+try {
+  raftClientReply = server.submitClientRequestAsync(raftClientRequest)
+  .get();
+} catch (Exception ex) {
+  throw new ServiceException(ex.getMessage(), ex);
+}
+
+return processReply(omRequest, raftClientReply);
+  }
+
+  /**
+   * Create Write RaftClient request from OMRequest.
+   * @param omRequest
+   * @return
+   */
+  private RaftClientRequest createWriteRaftClientRequest(OMRequest omRequest) {
+return new RaftClientRequest(clientId, server.getId(), raftGroupId,
+nextCallId(),
+Message.valueOf(OMRatisHelper.convertRequestToByteString(omRequest)),
+RaftClientRequest.writeRequestType(), null);
+  }
+
+  /**
+   * Process the raftClientReply and return OMResponse.
+   * @param omRequest
+   * @param reply
+   * @return
+   * @throws ServiceException
+   */
+  private OMResponse processReply(OMRequest omRequest, RaftClientReply reply)
+  throws ServiceException {
+// NotLeader exception is thrown only when the raft server to which the
+// request is submitted is not the leader. This can happen the first time
+// the client submits a request to OM.
+NotLeaderException notLeaderException = reply.getNotLeaderException();
+if (notLeaderException != null) {
+  throw new ServiceException(notLeaderException);
+}
+StateMachineException stateMachineException =
+reply.getStateMachineException();
+if (stateMachineException != null) {
+  OMResponse.Builder omResponse = OMResponse.newBuilder();
+  omResponse.setCmdType(omRequest.getCmdType());
+  omResponse.setSuccess(false);
+  omResponse.setMessage(stateMachineException.getCause().getMessage());
+  omResponse.setStatus(parseErrorStatus(
+  stateMachineException.getCause().getMessage()));
+  return omResponse.build();
+}
+
+try {
+  return OMRatisHelper.getOMResponseFromRaftClientReply(reply);
+} catch (InvalidProtocolBufferException ex) {
+  if (ex.getMessage() != null) {
+throw new ServiceException(ex.getMessage(), ex);
+  } else {
+throw new ServiceException(ex);
+  }
+}
+
+// TODO: Still need to handle RaftRetry failure exception and
+//  NotReplicated exception.
 
 Review comment:
   Currently, using ratisClient, there is a TODO for the RaftRetry failure exception,
and I don't see anything done to handle NotReplicatedException.
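   Since the TODO concerns cases that are still unhandled, a minimal, self-contained sketch of how such a classification could look is below. `ReplyClassifierSketch`, its `classify` method, and the string-based matching are illustrative assumptions, not the actual Ratis or Ozone API.

```java
/**
 * Hypothetical sketch of exhaustive reply handling: mapping the possible
 * outcomes of a Raft reply (leader change, retries exhausted, state-machine
 * failure, or a reply never committed on a quorum) to a response status.
 */
class ReplyClassifierSketch {
  enum Status { OK, NOT_LEADER, RETRY_FAILURE, NOT_REPLICATED, INTERNAL_ERROR }

  // Classify by exception type name (a stand-in for the reply's exception
  // getters) plus a flag saying whether the entry reached a quorum.
  static Status classify(String exceptionName, boolean replicated) {
    if (exceptionName != null) {
      if (exceptionName.contains("NotLeaderException")) {
        return Status.NOT_LEADER;      // client should retry on the leader
      }
      if (exceptionName.contains("RaftRetryFailureException")) {
        return Status.RETRY_FAILURE;   // retries exhausted; surface to client
      }
      return Status.INTERNAL_ERROR;    // e.g. a StateMachineException
    }
    // No exception, but the log entry may still not be quorum-replicated.
    return replicated ? Status.OK : Status.NOT_REPLICATED;
  }

  public static void main(String[] args) {
    System.out.println(classify(null, true));
    System.out.println(classify(null, false));
    System.out.println(classify("RaftRetryFailureException", true));
  }
}
```

   The point of the sketch is only that each reply outcome gets an explicit branch, so no failure mode falls through silently.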





[GitHub] [hadoop] bharatviswa504 commented on issue #850: HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…

2019-05-29 Thread GitBox
bharatviswa504 commented on issue #850: HDDS-1551. Implement Bucket Write 
Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#issuecomment-497152646
 
 
   Thank you @hanishakoneru for the review.
   I have addressed the review comments. For some of the questions, I have replied with my
answers.





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #850: HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…

2019-05-29 Thread GitBox
bharatviswa504 commented on a change in pull request #850: HDDS-1551. Implement 
Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288814908
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/TestOMRequestUtils.java
 ##
 @@ -0,0 +1,61 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.hadoop.ozone.om.request;
+
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.util.Time;
+
+import java.util.UUID;
+
+/**
+ * Helper class to test OMClientRequest classes.
+ */
+public final class TestOMRequestUtils {
+
+  private TestOMRequestUtils() {
+//Do nothing
+  }
+  public static void addEntryToDB(String volumeName, String bucketName,
+  OMMetadataManager omMetadataManager)
+  throws Exception {
+
+createVolumeEntryToDDB(volumeName, bucketName, omMetadataManager);
+
+OmBucketInfo omBucketInfo =
+OmBucketInfo.newBuilder().setVolumeName(volumeName)
+.setBucketName(bucketName).setCreationTime(Time.now()).build();
+
+omMetadataManager.getBucketTable().put(
+omMetadataManager.getBucketKey(volumeName, bucketName), omBucketInfo);
+  }
+
+  public static void createVolumeEntryToDDB(String volumeName,
+  String bucketName, OMMetadataManager omMetadataManager)
 
 Review comment:
   Done





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #850: HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…

2019-05-29 Thread GitBox
bharatviswa504 commented on a change in pull request #850: HDDS-1551. Implement 
Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288814783
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/TestOMBucketSetPropertyRequest.java
 ##
 @@ -0,0 +1,160 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.hadoop.ozone.om.request;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OmMetadataManagerImpl;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.
+BucketArgs;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.SetBucketPropertyRequest;
+
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+import org.mockito.Mockito;
+
+import java.util.UUID;
+
+import static org.mockito.Mockito.when;
+
+/**
+ * Tests OMBucketSetPropertyRequest class which handles OMSetBucketProperty
+ * request.
+ */
+public class TestOMBucketSetPropertyRequest {
+
+  @Rule
+  public TemporaryFolder folder = new TemporaryFolder();
+
+  private OzoneManager ozoneManager;
+  private OMMetrics omMetrics;
+  private OMMetadataManager omMetadataManager;
+
+
+  @Before
+  public void setup() throws Exception {
+ozoneManager = Mockito.mock(OzoneManager.class);
+omMetrics = OMMetrics.create();
+OzoneConfiguration ozoneConfiguration = new OzoneConfiguration();
+ozoneConfiguration.set(OMConfigKeys.OZONE_OM_DB_DIRS,
+folder.newFolder().getAbsolutePath());
+omMetadataManager = new OmMetadataManagerImpl(ozoneConfiguration);
+when(ozoneManager.getMetrics()).thenReturn(omMetrics);
+when(ozoneManager.getMetadataManager()).thenReturn(omMetadataManager);
+  }
+
+  @After
+  public void stop() {
+omMetrics.unRegister();
+  }
+
+  @Test
+  public void testPreExecute() throws Exception {
+String volumeName = UUID.randomUUID().toString();
+String bucketName = UUID.randomUUID().toString();
+
+
+OMRequest omRequest = createSeBucketPropertyRequest(volumeName,
+bucketName, true);
+
+OMBucketSetPropertyRequest omBucketSetPropertyRequest =
+new OMBucketSetPropertyRequest(omRequest);
+
+Assert.assertEquals(omRequest,
+omBucketSetPropertyRequest.preExecute(ozoneManager));
+  }
+
+  @Test
+  public void testValidateAndUpdateCache() throws Exception {
+
+String volumeName = UUID.randomUUID().toString();
+String bucketName = UUID.randomUUID().toString();
+
+
+OMRequest omRequest = createSeBucketPropertyRequest(volumeName,
+bucketName, true);
+
+// Create with default BucketInfo values
+TestOMRequestUtils.addEntryToDB(volumeName, bucketName, omMetadataManager);
+
+OMBucketSetPropertyRequest omBucketSetPropertyRequest =
+new OMBucketSetPropertyRequest(omRequest);
+
+OMClientResponse omClientResponse =
+omBucketSetPropertyRequest.validateAndUpdateCache(ozoneManager, 1);
+
+Assert.assertEquals(true,
+omMetadataManager.getBucketTable().get(
+omMetadataManager.getBucketKey(volumeName, bucketName))
+.getIsVersionEnabled());
+
+Assert.assertEquals(OzoneManagerProtocolProtos.Status.OK,
+omClientResponse.getOMResponse().getStatus());
+
+  }
+
+  @Test
+  public void testValidateAndUpdateCacheFails() throws Exception {
+
+String volumeName = UUID.randomUUID().toString();
+String bucketName = UUID.randomUUID().toString();
+
+
+OMRequest omRequest = createSeBucketPropertyRequest(volumeName,
+bucketName, true);
+
+
+OMBucketSetPropertyRequest 

[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #850: HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…

2019-05-29 Thread GitBox
bharatviswa504 commented on a change in pull request #850: HDDS-1551. Implement 
Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288814660
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/TestOMBucketCreateRequest.java
 ##
 @@ -0,0 +1,270 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.hadoop.ozone.om.request;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.UUID;
+
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+import org.mockito.Mockito;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OmMetadataManagerImpl;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.StorageTypeProto;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.util.Time;
+
+import static org.mockito.Mockito.when;
+
+/**
+ * Tests OMBucketCreateRequest class, which handles CreateBucket request.
+ */
+public class TestOMBucketCreateRequest {
+
+  @Rule
+  public TemporaryFolder folder = new TemporaryFolder();
+
+  private OzoneManager ozoneManager;
+  private OMMetrics omMetrics;
+  private OMMetadataManager omMetadataManager;
+
+
+  @Before
+  public void setup() throws Exception {
+ozoneManager = Mockito.mock(OzoneManager.class);
+omMetrics = OMMetrics.create();
+OzoneConfiguration ozoneConfiguration = new OzoneConfiguration();
+ozoneConfiguration.set(OMConfigKeys.OZONE_OM_DB_DIRS,
+folder.newFolder().getAbsolutePath());
+omMetadataManager = new OmMetadataManagerImpl(ozoneConfiguration);
+when(ozoneManager.getMetrics()).thenReturn(omMetrics);
+when(ozoneManager.getMetadataManager()).thenReturn(omMetadataManager);
+  }
+
+  @After
+  public void stop() {
+omMetrics.unRegister();
+  }
+
+
+  @Test
+  public void testPreExecute() throws Exception {
+String volumeName = UUID.randomUUID().toString();
+String bucketName = UUID.randomUUID().toString();
+doPreExecute(volumeName, bucketName);
+  }
+
+
+  @Test
+  public void testValidateAndUpdateCache() throws Exception {
+String volumeName = UUID.randomUUID().toString();
+String bucketName = UUID.randomUUID().toString();
+
+OMBucketCreateRequest omBucketCreateRequest = doPreExecute(volumeName,
+bucketName);
+
+doValidateAndUpdateCache(volumeName, bucketName,
+omBucketCreateRequest.getOmRequest());
+
+  }
+
+  @Test
+  public void testValidateAndUpdateCacheWithNoVolume() throws Exception {
+String volumeName = UUID.randomUUID().toString();
+String bucketName = UUID.randomUUID().toString();
+
+OMRequest originalRequest = createBucketRequest(bucketName, volumeName,
+false, StorageTypeProto.SSD);
+
+OMBucketCreateRequest omBucketCreateRequest =
+new OMBucketCreateRequest(originalRequest);
+
+String bucketKey = omMetadataManager.getBucketKey(volumeName, bucketName);
+
+// As we have not still called validateAndUpdateCache, get() should
+// return null.
+
+Assert.assertNull(omMetadataManager.getBucketTable().get(bucketKey));
+
+OMClientResponse omClientResponse =
+omBucketCreateRequest.validateAndUpdateCache(ozoneManager, 1);
+
+OMResponse omResponse = omClientResponse.getOMResponse();
+Assert.assertNotNull(omResponse.getCreateBucketResponse());
+

[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #850: HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…

2019-05-29 Thread GitBox
bharatviswa504 commented on a change in pull request #850: HDDS-1551. Implement 
Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288814608
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/TestOMBucketCreateRequest.java
 ##
 @@ -0,0 +1,270 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.hadoop.ozone.om.request;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.UUID;
+
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+import org.mockito.Mockito;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OmMetadataManagerImpl;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.StorageTypeProto;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.util.Time;
+
+import static org.mockito.Mockito.when;
+
+/**
+ * Tests OMBucketCreateRequest class, which handles CreateBucket request.
+ */
+public class TestOMBucketCreateRequest {
+
+  @Rule
+  public TemporaryFolder folder = new TemporaryFolder();
+
+  private OzoneManager ozoneManager;
+  private OMMetrics omMetrics;
+  private OMMetadataManager omMetadataManager;
+
+
+  @Before
+  public void setup() throws Exception {
+ozoneManager = Mockito.mock(OzoneManager.class);
+omMetrics = OMMetrics.create();
+OzoneConfiguration ozoneConfiguration = new OzoneConfiguration();
+ozoneConfiguration.set(OMConfigKeys.OZONE_OM_DB_DIRS,
+folder.newFolder().getAbsolutePath());
+omMetadataManager = new OmMetadataManagerImpl(ozoneConfiguration);
+when(ozoneManager.getMetrics()).thenReturn(omMetrics);
+when(ozoneManager.getMetadataManager()).thenReturn(omMetadataManager);
+  }
+
+  @After
+  public void stop() {
+omMetrics.unRegister();
+  }
+
+
+  @Test
+  public void testPreExecute() throws Exception {
+String volumeName = UUID.randomUUID().toString();
+String bucketName = UUID.randomUUID().toString();
+doPreExecute(volumeName, bucketName);
+  }
+
+
+  @Test
+  public void testValidateAndUpdateCache() throws Exception {
+String volumeName = UUID.randomUUID().toString();
+String bucketName = UUID.randomUUID().toString();
+
+OMBucketCreateRequest omBucketCreateRequest = doPreExecute(volumeName,
+bucketName);
+
+doValidateAndUpdateCache(volumeName, bucketName,
+omBucketCreateRequest.getOmRequest());
+
+  }
+
+  @Test
+  public void testValidateAndUpdateCacheWithNoVolume() throws Exception {
+String volumeName = UUID.randomUUID().toString();
+String bucketName = UUID.randomUUID().toString();
+
+OMRequest originalRequest = createBucketRequest(bucketName, volumeName,
+false, StorageTypeProto.SSD);
+
+OMBucketCreateRequest omBucketCreateRequest =
+new OMBucketCreateRequest(originalRequest);
+
+String bucketKey = omMetadataManager.getBucketKey(volumeName, bucketName);
+
+// As we have not still called validateAndUpdateCache, get() should
+// return null.
+
+Assert.assertNull(omMetadataManager.getBucketTable().get(bucketKey));
+
+OMClientResponse omClientResponse =
+omBucketCreateRequest.validateAndUpdateCache(ozoneManager, 1);
+
+OMResponse omResponse = omClientResponse.getOMResponse();
+Assert.assertNotNull(omResponse.getCreateBucketResponse());
+

[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #850: HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…

2019-05-29 Thread GitBox
bharatviswa504 commented on a change in pull request #850: HDDS-1551. Implement 
Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288814215
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/OMBucketSetPropertyRequest.java
 ##
 @@ -0,0 +1,187 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request;
+
+import java.io.IOException;
+import java.util.List;
+
+import com.google.common.base.Optional;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.hdds.protocol.StorageType;
+import org.apache.hadoop.ozone.OzoneAcl;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.KeyValueUtil;
+import org.apache.hadoop.ozone.om.helpers.OmBucketArgs;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.response.OMBucketSetPropertyResponse;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.BucketArgs;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerRatisUtils;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+/**
+ * Handle SetBucketProperty Request.
+ */
+public class OMBucketSetPropertyRequest extends OMClientRequest {
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMBucketSetPropertyRequest.class);
+
+  public OMBucketSetPropertyRequest(OMRequest omRequest) {
+super(omRequest);
+  }
+  @Override
+  public OMRequest preExecute(OzoneManager ozoneManager) throws IOException {
+return getOmRequest();
+  }
+
+  @Override
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+  long transactionLogIndex) {
+
+OMMetrics omMetrics = ozoneManager.getOmMetrics();
+
+// This will never be null, on a real Ozone cluster. For tests this might
+// be null. using mockito, to set omMetrics object, but still getting
+// null. For now added this not null check.
+if (omMetrics != null) {
+  omMetrics.incNumBucketUpdates();
+}
 
 Review comment:
   Done


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #850: HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…

2019-05-29 Thread GitBox
bharatviswa504 commented on a change in pull request #850: HDDS-1551. Implement 
Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288814271
 
 

 ##
 File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/package-info.java
 ##
 @@ -0,0 +1,21 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+/**
+ * This package contains classes for handling OMRequest's.
+ */
 
 Review comment:
   Done





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #850: HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…

2019-05-29 Thread GitBox
bharatviswa504 commented on a change in pull request #850: HDDS-1551. Implement 
Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288814047
 
 

 ##
 File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/OMBucketCreateRequest.java
 ##
 @@ -0,0 +1,203 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request;
+
+import java.io.IOException;
+
+import com.google.common.base.Optional;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.crypto.CipherSuite;
+import org.apache.hadoop.crypto.key.KeyProvider;
+import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension;
+import org.apache.hadoop.fs.CommonConfigurationKeys;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.response.OMBucketCreateResponse;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.BucketEncryptionInfoProto;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.CreateBucketRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.CreateBucketResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.BucketInfo;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.ozone.protocolPB.OMPBHelper;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerRatisUtils;
+import org.apache.hadoop.util.Time;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+import static org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.CryptoProtocolVersionProto.ENCRYPTION_ZONES;
+
+/**
+ * Handles CreateBucket Request.
+ */
+public class OMBucketCreateRequest extends OMClientRequest {
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMBucketCreateRequest.class);
+
+  public OMBucketCreateRequest(OMRequest omRequest) {
+super(omRequest);
+  }
+  @Override
+  public OMRequest preExecute(OzoneManager ozoneManager) throws IOException {
+
+// Get original request.
+CreateBucketRequest createBucketRequest =
+getOmRequest().getCreateBucketRequest();
+BucketInfo bucketInfo = createBucketRequest.getBucketInfo();
+
+// Get KMS provider.
+KeyProviderCryptoExtension kmsProvider =
+ozoneManager.getKmsProvider();
+
+// Create new Bucket request with new bucket info.
+CreateBucketRequest.Builder newCreateBucketRequest =
+createBucketRequest.toBuilder();
+
+BucketInfo.Builder newBucketInfo = bucketInfo.toBuilder();
+
+newCreateBucketRequest.setBucketInfo(
+newBucketInfo.setCreationTime(Time.now()));
+
+if (bucketInfo.hasBeinfo()) {
+  newBucketInfo.setBeinfo(getBeinfo(kmsProvider, bucketInfo));
+}
+
+newCreateBucketRequest.setBucketInfo(newBucketInfo.build());
+return getOmRequest().toBuilder().setCreateBucketRequest(
+newCreateBucketRequest.build()).build();
+  }
+
+  @Override
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+  long transactionLogIndex) {
+OMMetrics omMetrics = ozoneManager.getMetrics();
+omMetrics.incNumBucketCreates();
+
+OMMetadataManager metadataManager = ozoneManager.getMetadataManager();
+
+BucketInfo bucketInfo = getBucketInfoFromRequest();
+
+String volumeName = bucketInfo.getVolumeName();
+String bucketName = bucketInfo.getBucketName();
+
+OMResponse.Builder omResponse = OMResponse.newBuilder().setCmdType(
+OzoneManagerProtocolProtos.Type.CreateBucket).setStatus(
+

[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #850: HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…

2019-05-29 Thread GitBox
bharatviswa504 commented on a change in pull request #850: HDDS-1551. Implement 
Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288813640
 
 

 ##
 File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/utils/OzoneManagerRatisUtils.java
 ##
 @@ -0,0 +1,74 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+
+package org.apache.hadoop.ozone.om.ratis.utils;
+
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.request.OMBucketCreateRequest;
+import org.apache.hadoop.ozone.om.request.OMBucketDeleteRequest;
+import org.apache.hadoop.ozone.om.request.OMBucketSetPropertyRequest;
+import org.apache.hadoop.ozone.om.request.OMClientRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.Status;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.Type;
+
+import java.io.IOException;
+
+/**
+ * Utility class used by OzoneManager HA.
+ */
+public final class OzoneManagerRatisUtils {
+
+  private OzoneManagerRatisUtils() {
+  }
+  /**
+   * Create OMClientRequest which encapsulates the OMRequest.
+   * @param omRequest
+   * @return OMClientRequest
+   * @throws IOException
+   */
+  public static OMClientRequest createClientRequest(OMRequest omRequest)
+  throws IOException {
+Type cmdType = omRequest.getCmdType();
+switch (cmdType) {
+case CreateBucket:
+  return new OMBucketCreateRequest(omRequest);
+case DeleteBucket:
+  return new OMBucketDeleteRequest(omRequest);
+case SetBucketProperty:
+  return new OMBucketSetPropertyRequest(omRequest);
+default:
+  // TODO: will update once all request types are implemented.
+  return null;
+}
+  }
+
+  /**
+   * Convert exception result to {@link OzoneManagerProtocolProtos.Status}.
+   * @param exception
+   * @return {@link OzoneManagerProtocolProtos.Status}
+   */
+  public static Status exceptionToResponseStatus(IOException exception) {
+if (exception instanceof OMException) {
+  return Status.values()[((OMException) exception).getResult().ordinal()];
 
 Review comment:
   Here ordinal() gives the position, and from that position we look up the value in 
Status.values() (which returns the array of Status constants)
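For readers following this thread: the mapping relies on the two enums declaring their constants in the same order. A minimal, self-contained sketch of the technique, using hypothetical stand-in enums rather than the real OMException.ResultCodes and protobuf Status types:

```java
public class EnumOrdinalMappingSketch {
    // Stand-ins for OMException.ResultCodes and the protobuf Status enum;
    // the real types are assumed to declare the same constants in the same order.
    enum ResultCodes { OK, VOLUME_NOT_FOUND, BUCKET_NOT_FOUND }
    enum Status { OK, VOLUME_NOT_FOUND, BUCKET_NOT_FOUND }

    // Position-based lookup: ordinal() gives the constant's position in its
    // declaring enum, and Status.values() returns all Status constants in
    // declaration order, so the same index selects the matching constant.
    static Status toStatus(ResultCodes result) {
        return Status.values()[result.ordinal()];
    }

    public static void main(String[] args) {
        System.out.println(toStatus(ResultCodes.BUCKET_NOT_FOUND)); // BUCKET_NOT_FOUND
    }
}
```

Note the fragility: the mapping silently breaks if either enum gains a constant in the middle of its declaration list.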





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #850: HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…

2019-05-29 Thread GitBox
bharatviswa504 commented on a change in pull request #850: HDDS-1551. Implement 
Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288813760
 
 

 ##
 File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/OMBucketCreateRequest.java
 ##
 @@ -0,0 +1,203 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request;
+
+import java.io.IOException;
+
+import com.google.common.base.Optional;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.crypto.CipherSuite;
+import org.apache.hadoop.crypto.key.KeyProvider;
+import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension;
+import org.apache.hadoop.fs.CommonConfigurationKeys;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.response.OMBucketCreateResponse;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.BucketEncryptionInfoProto;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.CreateBucketRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.CreateBucketResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.BucketInfo;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.ozone.protocolPB.OMPBHelper;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerRatisUtils;
+import org.apache.hadoop.util.Time;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+import static org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.CryptoProtocolVersionProto.ENCRYPTION_ZONES;
+
+/**
+ * Handles CreateBucket Request.
+ */
+public class OMBucketCreateRequest extends OMClientRequest {
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMBucketCreateRequest.class);
+
+  public OMBucketCreateRequest(OMRequest omRequest) {
+super(omRequest);
+  }
+  @Override
+  public OMRequest preExecute(OzoneManager ozoneManager) throws IOException {
+
+// Get original request.
+CreateBucketRequest createBucketRequest =
+getOmRequest().getCreateBucketRequest();
+BucketInfo bucketInfo = createBucketRequest.getBucketInfo();
+
+// Get KMS provider.
+KeyProviderCryptoExtension kmsProvider =
+ozoneManager.getKmsProvider();
+
+// Create new Bucket request with new bucket info.
+CreateBucketRequest.Builder newCreateBucketRequest =
+createBucketRequest.toBuilder();
+
+BucketInfo.Builder newBucketInfo = bucketInfo.toBuilder();
+
+newCreateBucketRequest.setBucketInfo(
 
 Review comment:
   Done






[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #850: HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…

2019-05-29 Thread GitBox
bharatviswa504 commented on a change in pull request #850: HDDS-1551. Implement 
Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288813431
 
 

 ##
 File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/BucketManagerImpl.java
 ##
 @@ -296,11 +281,7 @@ public OmBucketInfo setBucketProperty(OmBucketArgs args) throws IOException {
   bucketInfoBuilder.setCreationTime(oldBucketInfo.getCreationTime());
 
   OmBucketInfo omBucketInfo = bucketInfoBuilder.build();
-
-  if (!isRatisEnabled) {
-commitSetBucketPropertyInfoToDB(omBucketInfo);
-  }
-  return omBucketInfo;
+  commitSetBucketPropertyInfoToDB(omBucketInfo);
 
 Review comment:
   Done.





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #850: HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…

2019-05-29 Thread GitBox
bharatviswa504 commented on a change in pull request #850: HDDS-1551. Implement 
Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288813544
 
 

 ##
 File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerRatisServer.java
 ##
 @@ -97,6 +112,101 @@ private static long nextCallId() {
 return CALL_ID_COUNTER.getAndIncrement() & Long.MAX_VALUE;
   }
 
+  /**
+   * Submit request to Ratis server.
+   * @param omRequest
+   * @return OMResponse - response returned to the client.
+   * @throws ServiceException
+   */
+  public OMResponse submitRequest(OMRequest omRequest) throws ServiceException {
+RaftClientRequest raftClientRequest =
+createWriteRaftClientRequest(omRequest);
+RaftClientReply raftClientReply;
+try {
+  raftClientReply = server.submitClientRequestAsync(raftClientRequest)
+  .get();
+} catch (Exception ex) {
+  throw new ServiceException(ex.getMessage(), ex);
+}
+
+return processReply(omRequest, raftClientReply);
+  }
+
+  /**
+   * Create Write RaftClient request from OMRequest.
+   * @param omRequest
+   * @return
+   */
+  private RaftClientRequest createWriteRaftClientRequest(OMRequest omRequest) {
+return new RaftClientRequest(clientId, server.getId(), raftGroupId,
+nextCallId(),
+Message.valueOf(OMRatisHelper.convertRequestToByteString(omRequest)),
+RaftClientRequest.writeRequestType(), null);
+  }
+
+  /**
+   * Process the raftClientReply and return OMResponse.
+   * @param omRequest
+   * @param reply
+   * @return
+   * @throws ServiceException
+   */
+  private OMResponse processReply(OMRequest omRequest, RaftClientReply reply)
+  throws ServiceException {
+// NotLeader exception is thrown only when the raft server to which the
+// request is submitted is not the leader. This can happen first time
+// when client is submitting request to OM.
+NotLeaderException notLeaderException = reply.getNotLeaderException();
+if (notLeaderException != null) {
+  throw new ServiceException(notLeaderException);
+}
+StateMachineException stateMachineException =
+reply.getStateMachineException();
+if (stateMachineException != null) {
+  OMResponse.Builder omResponse = OMResponse.newBuilder();
+  omResponse.setCmdType(omRequest.getCmdType());
+  omResponse.setSuccess(false);
+  omResponse.setMessage(stateMachineException.getCause().getMessage());
+  omResponse.setStatus(parseErrorStatus(
+  stateMachineException.getCause().getMessage()));
+  return omResponse.build();
+}
+
+try {
+  return OMRatisHelper.getOMResponseFromRaftClientReply(reply);
+} catch (InvalidProtocolBufferException ex) {
+  if (ex.getMessage() != null) {
+throw new ServiceException(ex.getMessage(), ex);
+  } else {
+throw new ServiceException(ex);
+  }
+}
+
+// TODO: Still need to handle RaftRetry failure exception and
+//  NotReplicated exception.
 
 Review comment:
   Currently, in the Ratis client there is a TODO for the RaftRetry failure 
exception, and I don't see anything being done to handle NotReplicatedException.
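To illustrate the kind of handling the TODO refers to, here is a generic retry sketch. This is not the actual Ratis client API; the `submitWithRetry` wrapper, its retry policy, and the submitted callable are all hypothetical:

```java
import java.util.concurrent.Callable;

public class RetrySketch {
    // Hypothetical retry wrapper: re-attempts a submission a fixed number of
    // times with linear backoff before surfacing the last failure.
    static <T> T submitWithRetry(Callable<T> submit, int maxAttempts,
            long backoffMillis) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return submit.call();
            } catch (Exception e) {
                last = e;
                Thread.sleep(backoffMillis * attempt); // linear backoff
            }
        }
        throw last; // all attempts exhausted
    }

    public static void main(String[] args) throws Exception {
        final int[] calls = {0};
        // Simulated submission that fails twice, then succeeds.
        String reply = submitWithRetry(() -> {
            if (++calls[0] < 3) {
                throw new RuntimeException("transient failure");
            }
            return "ok";
        }, 5, 1L);
        System.out.println(reply + " after " + calls[0] + " attempts"); // ok after 3 attempts
    }
}
```

A real implementation would additionally distinguish retriable replies (e.g. a retry-failure or not-replicated result) from fatal ones before looping.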





[GitHub] [hadoop] hanishakoneru commented on a change in pull request #850: HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…

2019-05-29 Thread GitBox
hanishakoneru commented on a change in pull request #850: HDDS-1551. Implement 
Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288727105
 
 

 ##
 File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/utils/OzoneManagerRatisUtils.java
 ##
 @@ -0,0 +1,74 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+
+package org.apache.hadoop.ozone.om.ratis.utils;
+
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.request.OMBucketCreateRequest;
+import org.apache.hadoop.ozone.om.request.OMBucketDeleteRequest;
+import org.apache.hadoop.ozone.om.request.OMBucketSetPropertyRequest;
+import org.apache.hadoop.ozone.om.request.OMClientRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.Status;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.Type;
+
+import java.io.IOException;
+
+/**
+ * Utility class used by OzoneManager HA.
+ */
+public final class OzoneManagerRatisUtils {
+
+  private OzoneManagerRatisUtils() {
+  }
+  /**
+   * Create OMClientRequest which encapsulates the OMRequest.
+   * @param omRequest
+   * @return OMClientRequest
+   * @throws IOException
+   */
+  public static OMClientRequest createClientRequest(OMRequest omRequest)
+  throws IOException {
+Type cmdType = omRequest.getCmdType();
+switch (cmdType) {
+case CreateBucket:
+  return new OMBucketCreateRequest(omRequest);
+case DeleteBucket:
+  return new OMBucketDeleteRequest(omRequest);
+case SetBucketProperty:
+  return new OMBucketSetPropertyRequest(omRequest);
+default:
+  // TODO: will update once all request types are implemented.
+  return null;
+}
+  }
+
+  /**
+   * Convert exception result to {@link OzoneManagerProtocolProtos.Status}.
+   * @param exception
+   * @return {@link OzoneManagerProtocolProtos.Status}
+   */
+  public static Status exceptionToResponseStatus(IOException exception) {
+if (exception instanceof OMException) {
+  return Status.values()[((OMException) exception).getResult().ordinal()];
 
 Review comment:
   Should it not be Status.valueOf()? Or does this also give the same result?
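On the question raised here: `Status.valueOf(name)` looks up a constant by name rather than by position, so it keeps working even if the two enums drift out of declaration order. A hedged sketch with hypothetical enums (deliberately declared in different orders) showing the difference between the two lookups:

```java
public class EnumValueOfSketch {
    enum ResultCodes { OK, BUCKET_NOT_FOUND }
    // Deliberately reversed declaration order relative to ResultCodes.
    enum Status { BUCKET_NOT_FOUND, OK }

    // Name-based lookup: independent of declaration order.
    static Status byName(ResultCodes result) {
        return Status.valueOf(result.name());
    }

    // Position-based lookup: silently wrong once the orders diverge.
    static Status byOrdinal(ResultCodes result) {
        return Status.values()[result.ordinal()];
    }

    public static void main(String[] args) {
        System.out.println(byName(ResultCodes.OK));    // OK
        System.out.println(byOrdinal(ResultCodes.OK)); // BUCKET_NOT_FOUND
    }
}
```

The two approaches give the same result only while the enums' declaration orders match, which is the situation the ordinal-based code in the diff assumes.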





[GitHub] [hadoop] hanishakoneru commented on a change in pull request #850: HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…

2019-05-29 Thread GitBox
hanishakoneru commented on a change in pull request #850: HDDS-1551. Implement 
Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288809976
 
 

 ##
 File path: hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/TestOMBucketSetPropertyRequest.java
 ##
 @@ -0,0 +1,160 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.hadoop.ozone.om.request;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OmMetadataManagerImpl;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.
+BucketArgs;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.SetBucketPropertyRequest;
+
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+import org.mockito.Mockito;
+
+import java.util.UUID;
+
+import static org.mockito.Mockito.when;
+
+/**
+ * Tests OMBucketSetPropertyRequest class which handles OMSetBucketProperty
+ * request.
+ */
+public class TestOMBucketSetPropertyRequest {
+
+  @Rule
+  public TemporaryFolder folder = new TemporaryFolder();
+
+  private OzoneManager ozoneManager;
+  private OMMetrics omMetrics;
+  private OMMetadataManager omMetadataManager;
+
+
+  @Before
+  public void setup() throws Exception {
+ozoneManager = Mockito.mock(OzoneManager.class);
+omMetrics = OMMetrics.create();
+OzoneConfiguration ozoneConfiguration = new OzoneConfiguration();
+ozoneConfiguration.set(OMConfigKeys.OZONE_OM_DB_DIRS,
+folder.newFolder().getAbsolutePath());
+omMetadataManager = new OmMetadataManagerImpl(ozoneConfiguration);
+when(ozoneManager.getMetrics()).thenReturn(omMetrics);
+when(ozoneManager.getMetadataManager()).thenReturn(omMetadataManager);
+  }
+
+  @After
+  public void stop() {
+omMetrics.unRegister();
+  }
+
+  @Test
+  public void testPreExecute() throws Exception {
+String volumeName = UUID.randomUUID().toString();
+String bucketName = UUID.randomUUID().toString();
+
+
+OMRequest omRequest = createSeBucketPropertyRequest(volumeName,
+bucketName, true);
+
+OMBucketSetPropertyRequest omBucketSetPropertyRequest =
+new OMBucketSetPropertyRequest(omRequest);
+
+Assert.assertEquals(omRequest,
+omBucketSetPropertyRequest.preExecute(ozoneManager));
+  }
+
+  @Test
+  public void testValidateAndUpdateCache() throws Exception {
+
+String volumeName = UUID.randomUUID().toString();
+String bucketName = UUID.randomUUID().toString();
+
+
+OMRequest omRequest = createSeBucketPropertyRequest(volumeName,
+bucketName, true);
+
+// Create with default BucketInfo values
+TestOMRequestUtils.addEntryToDB(volumeName, bucketName, omMetadataManager);
+
+OMBucketSetPropertyRequest omBucketSetPropertyRequest =
+new OMBucketSetPropertyRequest(omRequest);
+
+OMClientResponse omClientResponse =
+omBucketSetPropertyRequest.validateAndUpdateCache(ozoneManager, 1);
+
+Assert.assertEquals(true,
+omMetadataManager.getBucketTable().get(
+omMetadataManager.getBucketKey(volumeName, bucketName))
+.getIsVersionEnabled());
+
+Assert.assertEquals(OzoneManagerProtocolProtos.Status.OK,
+omClientResponse.getOMResponse().getStatus());
+
+  }
+
+  @Test
+  public void testValidateAndUpdateCacheFails() throws Exception {
+
+String volumeName = UUID.randomUUID().toString();
+String bucketName = UUID.randomUUID().toString();
+
+
+OMRequest omRequest = createSeBucketPropertyRequest(volumeName,
+bucketName, true);
+
+
+OMBucketSetPropertyRequest 

[GitHub] [hadoop] hanishakoneru commented on a change in pull request #850: HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…

2019-05-29 Thread GitBox
hanishakoneru commented on a change in pull request #850: HDDS-1551. Implement 
Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288810202
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/TestOMRequestUtils.java
 ##
 @@ -0,0 +1,61 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.hadoop.ozone.om.request;
+
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.util.Time;
+
+import java.util.UUID;
+
+/**
+ * Helper class to test OMClientRequest classes.
+ */
+public final class TestOMRequestUtils {
+
+  private TestOMRequestUtils() {
+//Do nothing
+  }
+  public static void addEntryToDB(String volumeName, String bucketName,
+  OMMetadataManager omMetadataManager)
+  throws Exception {
+
+createVolumeEntryToDDB(volumeName, bucketName, omMetadataManager);
+
+OmBucketInfo omBucketInfo =
+OmBucketInfo.newBuilder().setVolumeName(volumeName)
+.setBucketName(bucketName).setCreationTime(Time.now()).build();
+
+omMetadataManager.getBucketTable().put(
+omMetadataManager.getBucketKey(volumeName, bucketName), omBucketInfo);
+  }
+
+  public static void createVolumeEntryToDDB(String volumeName,
+  String bucketName, OMMetadataManager omMetadataManager)
 
 Review comment:
   bucketName is not used here.
   Can we rename this method to something like addVolumeToDB?
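
   The suggested rename might look like the sketch below. The stub
   `OmVolumeArgs` class and the in-memory volume table are stand-ins for the
   real Ozone classes (`OMMetadataManager`'s volume table), so treat this as an
   illustration of the reviewer's suggestion, not the actual patch:

```java
// Sketch of the suggested helper rename: drop the unused bucketName
// parameter and call the method addVolumeToDB. Stub types below are
// assumptions standing in for the real Ozone OM classes.
import java.util.HashMap;
import java.util.Map;

public class VolumeHelperSketch {
    // Minimal stand-in for org.apache.hadoop.ozone.om.helpers.OmVolumeArgs.
    static class OmVolumeArgs {
        final String volume;
        OmVolumeArgs(String volume) { this.volume = volume; }
    }

    // Stand-in for the OM metadata manager's volume table.
    static final Map<String, OmVolumeArgs> VOLUME_TABLE = new HashMap<>();

    // Renamed helper: only the volume name is needed to add a volume entry.
    static void addVolumeToDB(String volumeName) {
        VOLUME_TABLE.put("/" + volumeName, new OmVolumeArgs(volumeName));
    }

    public static void main(String[] args) {
        addVolumeToDB("vol1");
        if (!VOLUME_TABLE.containsKey("/vol1")) {
            throw new AssertionError("volume entry missing");
        }
        System.out.println("ok");
    }
}
```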


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hanishakoneru commented on a change in pull request #850: HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…

2019-05-29 Thread GitBox
hanishakoneru commented on a change in pull request #850: HDDS-1551. Implement 
Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288800930
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/OMBucketCreateRequest.java
 ##
 @@ -0,0 +1,203 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request;
+
+import java.io.IOException;
+
+import com.google.common.base.Optional;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.crypto.CipherSuite;
+import org.apache.hadoop.crypto.key.KeyProvider;
+import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension;
+import org.apache.hadoop.fs.CommonConfigurationKeys;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.response.OMBucketCreateResponse;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.BucketEncryptionInfoProto;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.CreateBucketRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.CreateBucketResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.BucketInfo;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.ozone.protocolPB.OMPBHelper;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerRatisUtils;
+import org.apache.hadoop.util.Time;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+import static org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.CryptoProtocolVersionProto.ENCRYPTION_ZONES;
+
+/**
+ * Handles CreateBucket Request.
+ */
+public class OMBucketCreateRequest extends OMClientRequest {
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMBucketCreateRequest.class);
+
+  public OMBucketCreateRequest(OMRequest omRequest) {
+super(omRequest);
+  }
+  @Override
+  public OMRequest preExecute(OzoneManager ozoneManager) throws IOException {
+
+// Get original request.
+CreateBucketRequest createBucketRequest =
+getOmRequest().getCreateBucketRequest();
+BucketInfo bucketInfo = createBucketRequest.getBucketInfo();
+
+// Get KMS provider.
+KeyProviderCryptoExtension kmsProvider =
+ozoneManager.getKmsProvider();
+
+// Create new Bucket request with new bucket info.
+CreateBucketRequest.Builder newCreateBucketRequest =
+createBucketRequest.toBuilder();
+
+BucketInfo.Builder newBucketInfo = bucketInfo.toBuilder();
+
+newCreateBucketRequest.setBucketInfo(
+newBucketInfo.setCreationTime(Time.now()));
+
+if (bucketInfo.hasBeinfo()) {
+  newBucketInfo.setBeinfo(getBeinfo(kmsProvider, bucketInfo));
+}
+
+newCreateBucketRequest.setBucketInfo(newBucketInfo.build());
+return getOmRequest().toBuilder().setCreateBucketRequest(
+newCreateBucketRequest.build()).build();
+  }
+
+  @Override
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+  long transactionLogIndex) {
+OMMetrics omMetrics = ozoneManager.getMetrics();
+omMetrics.incNumBucketCreates();
+
+OMMetadataManager metadataManager = ozoneManager.getMetadataManager();
+
+BucketInfo bucketInfo = getBucketInfoFromRequest();
+
+String volumeName = bucketInfo.getVolumeName();
+String bucketName = bucketInfo.getBucketName();
+
+OMResponse.Builder omResponse = OMResponse.newBuilder().setCmdType(
+OzoneManagerProtocolProtos.Type.CreateBucket).setStatus(
+

[GitHub] [hadoop] hanishakoneru commented on a change in pull request #850: HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…

2019-05-29 Thread GitBox
hanishakoneru commented on a change in pull request #850: HDDS-1551. Implement 
Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288809062
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/TestOMBucketCreateRequest.java
 ##
 @@ -0,0 +1,270 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.hadoop.ozone.om.request;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.UUID;
+
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+import org.mockito.Mockito;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OmMetadataManagerImpl;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.StorageTypeProto;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.util.Time;
+
+import static org.mockito.Mockito.when;
+
+/**
+ * Tests OMBucketCreateRequest class, which handles CreateBucket request.
+ */
+public class TestOMBucketCreateRequest {
+
+  @Rule
+  public TemporaryFolder folder = new TemporaryFolder();
+
+  private OzoneManager ozoneManager;
+  private OMMetrics omMetrics;
+  private OMMetadataManager omMetadataManager;
+
+
+  @Before
+  public void setup() throws Exception {
+ozoneManager = Mockito.mock(OzoneManager.class);
+omMetrics = OMMetrics.create();
+OzoneConfiguration ozoneConfiguration = new OzoneConfiguration();
+ozoneConfiguration.set(OMConfigKeys.OZONE_OM_DB_DIRS,
+folder.newFolder().getAbsolutePath());
+omMetadataManager = new OmMetadataManagerImpl(ozoneConfiguration);
+when(ozoneManager.getMetrics()).thenReturn(omMetrics);
+when(ozoneManager.getMetadataManager()).thenReturn(omMetadataManager);
+  }
+
+  @After
+  public void stop() {
+omMetrics.unRegister();
+  }
+
+
+  @Test
+  public void testPreExecute() throws Exception {
+String volumeName = UUID.randomUUID().toString();
+String bucketName = UUID.randomUUID().toString();
+doPreExecute(volumeName, bucketName);
+  }
+
+
+  @Test
+  public void testValidateAndUpdateCache() throws Exception {
+String volumeName = UUID.randomUUID().toString();
+String bucketName = UUID.randomUUID().toString();
+
+OMBucketCreateRequest omBucketCreateRequest = doPreExecute(volumeName,
+bucketName);
+
+doValidateAndUpdateCache(volumeName, bucketName,
+omBucketCreateRequest.getOmRequest());
+
+  }
+
+  @Test
+  public void testValidateAndUpdateCacheWithNoVolume() throws Exception {
+String volumeName = UUID.randomUUID().toString();
+String bucketName = UUID.randomUUID().toString();
+
+OMRequest originalRequest = createBucketRequest(bucketName, volumeName,
+false, StorageTypeProto.SSD);
+
+OMBucketCreateRequest omBucketCreateRequest =
+new OMBucketCreateRequest(originalRequest);
+
+String bucketKey = omMetadataManager.getBucketKey(volumeName, bucketName);
+
+// As we have not yet called validateAndUpdateCache, get() should
+// return null.
+
+Assert.assertNull(omMetadataManager.getBucketTable().get(bucketKey));
+
+OMClientResponse omClientResponse =
+omBucketCreateRequest.validateAndUpdateCache(ozoneManager, 1);
+
+OMResponse omResponse = omClientResponse.getOMResponse();
+Assert.assertNotNull(omResponse.getCreateBucketResponse());
+

[GitHub] [hadoop] hanishakoneru commented on a change in pull request #850: HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…

2019-05-29 Thread GitBox
hanishakoneru commented on a change in pull request #850: HDDS-1551. Implement 
Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288808597
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/TestOMBucketCreateRequest.java
 ##
 @@ -0,0 +1,270 @@

[GitHub] [hadoop] hanishakoneru commented on a change in pull request #850: HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…

2019-05-29 Thread GitBox
hanishakoneru commented on a change in pull request #850: HDDS-1551. Implement 
Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288804803
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/package-info.java
 ##
 @@ -0,0 +1,21 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+/**
+ * This package contains classes for handling OMRequest's.
+ */
 
 Review comment:
   Typo: OMRequests





[GitHub] [hadoop] hanishakoneru commented on a change in pull request #850: HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…

2019-05-29 Thread GitBox
hanishakoneru commented on a change in pull request #850: HDDS-1551. Implement 
Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288721948
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/BucketManagerImpl.java
 ##
 @@ -296,11 +281,7 @@ public OmBucketInfo setBucketProperty(OmBucketArgs args) 
throws IOException {
   bucketInfoBuilder.setCreationTime(oldBucketInfo.getCreationTime());
 
   OmBucketInfo omBucketInfo = bucketInfoBuilder.build();
-
-  if (!isRatisEnabled) {
-commitSetBucketPropertyInfoToDB(omBucketInfo);
-  }
-  return omBucketInfo;
+  commitSetBucketPropertyInfoToDB(omBucketInfo);
 
 Review comment:
   commitSetBucketPropertyInfoToDB() just calls commitCreateBucketInfoToDB()
without any modification. We can directly call commitCreateBucketInfoToDB()
here (and maybe rename it to commitBucketInfoToDB?).
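
   A sketch of that refactor is below: one shared commit method replacing the
   pass-through. The `OmBucketInfo` stub and the in-memory bucket table are
   assumptions standing in for the real Ozone OM classes:

```java
// Sketch of the suggested refactor: a single commitBucketInfoToDB used by
// both the create-bucket and set-bucket-property paths, instead of a
// pass-through commitSetBucketPropertyInfoToDB. Stub types are assumptions.
import java.util.HashMap;
import java.util.Map;

public class BucketCommitSketch {
    // Minimal stand-in for org.apache.hadoop.ozone.om.helpers.OmBucketInfo.
    static class OmBucketInfo {
        final String volumeName, bucketName;
        OmBucketInfo(String v, String b) { volumeName = v; bucketName = b; }
    }

    private final Map<String, OmBucketInfo> bucketTable = new HashMap<>();

    private String bucketKey(OmBucketInfo info) {
        return "/" + info.volumeName + "/" + info.bucketName;
    }

    // One shared commit path for both create and setBucketProperty.
    void commitBucketInfoToDB(OmBucketInfo info) {
        bucketTable.put(bucketKey(info), info);
    }

    public static void main(String[] args) {
        BucketCommitSketch db = new BucketCommitSketch();
        db.commitBucketInfoToDB(new OmBucketInfo("vol1", "bucket1"));  // create
        OmBucketInfo updated = new OmBucketInfo("vol1", "bucket1");
        db.commitBucketInfoToDB(updated);  // setBucketProperty reuses the path
        if (db.bucketTable.get("/vol1/bucket1") != updated) {
            throw new AssertionError("commit did not overwrite bucket info");
        }
        System.out.println("ok");
    }
}
```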





[GitHub] [hadoop] hadoop-yetus commented on issue #850: HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…

2019-05-29 Thread GitBox
hadoop-yetus commented on issue #850: HDDS-1551. Implement Bucket Write 
Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#issuecomment-497148116
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 85 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 15 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for branch |
   | +1 | mvninstall | 757 | trunk passed |
   | +1 | compile | 329 | trunk passed |
   | +1 | checkstyle | 82 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 982 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 159 | trunk passed |
   | 0 | spotbugs | 335 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 556 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for patch |
   | +1 | mvninstall | 527 | the patch passed |
   | +1 | compile | 282 | the patch passed |
   | +1 | cc | 282 | the patch passed |
   | +1 | javac | 282 | the patch passed |
   | +1 | checkstyle | 71 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 641 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 72 | hadoop-ozone generated 3 new + 5 unchanged - 0 fixed = 
8 total (was 5) |
   | +1 | findbugs | 537 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 263 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1407 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 43 | The patch does not generate ASF License warnings. |
   | | | 7085 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.om.TestScmSafeMode |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/850 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle cc |
   | uname | Linux 05439e3af54b 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0ead209 |
   | Default Java | 1.8.0_212 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/9/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/9/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/9/testReport/ |
   | Max. process+thread count | 4948 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/9/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hanishakoneru commented on a change in pull request #850: HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…

2019-05-29 Thread GitBox
hanishakoneru commented on a change in pull request #850: HDDS-1551. Implement 
Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288802440
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/OMBucketSetPropertyRequest.java
 ##
 @@ -0,0 +1,187 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request;
+
+import java.io.IOException;
+import java.util.List;
+
+import com.google.common.base.Optional;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.hdds.protocol.StorageType;
+import org.apache.hadoop.ozone.OzoneAcl;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.KeyValueUtil;
+import org.apache.hadoop.ozone.om.helpers.OmBucketArgs;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.response.OMBucketSetPropertyResponse;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.BucketArgs;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerRatisUtils;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+/**
+ * Handle SetBucketProperty Request.
+ */
+public class OMBucketSetPropertyRequest extends OMClientRequest {
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMBucketSetPropertyRequest.class);
+
+  public OMBucketSetPropertyRequest(OMRequest omRequest) {
+super(omRequest);
+  }
+  @Override
+  public OMRequest preExecute(OzoneManager ozoneManager) throws IOException {
+return getOmRequest();
+  }
+
+  @Override
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+  long transactionLogIndex) {
+
+OMMetrics omMetrics = ozoneManager.getOmMetrics();
+
+// This will never be null on a real Ozone cluster. In tests it might be
+// null: even when using Mockito to set the omMetrics object, it can still
+// come back null. For now, add this null check.
+if (omMetrics != null) {
+  omMetrics.incNumBucketUpdates();
+}
 
 Review comment:
   Let's add a TODO here to keep track of this.
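
   A sketch of what that could look like: keep the null guard but mark it with
   a TODO so the test-only gap stays tracked. `OMMetrics` is stubbed here; the
   real class lives in org.apache.hadoop.ozone.om, so this is illustrative only:

```java
// Sketch of the reviewer's ask: guard the metrics call, and record a TODO
// noting that the guard exists only because tests may leave metrics null.
// The OMMetrics stub below is an assumption, not the real class.
public class MetricsGuardSketch {
    static class OMMetrics {
        long numBucketUpdates;
        void incNumBucketUpdates() { numBucketUpdates++; }
    }

    static void recordBucketUpdate(OMMetrics omMetrics) {
        // TODO: omMetrics should never be null on a real cluster; remove this
        // guard once tests wire up a real (or properly mocked) OMMetrics.
        if (omMetrics != null) {
            omMetrics.incNumBucketUpdates();
        }
    }

    public static void main(String[] args) {
        recordBucketUpdate(null);  // test path: must not throw NPE
        OMMetrics metrics = new OMMetrics();
        recordBucketUpdate(metrics);
        if (metrics.numBucketUpdates != 1) {
            throw new AssertionError("expected exactly one bucket update");
        }
        System.out.println("ok");
    }
}
```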





[GitHub] [hadoop] hanishakoneru commented on a change in pull request #850: HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…

2019-05-29 Thread GitBox
hanishakoneru commented on a change in pull request #850: HDDS-1551. Implement 
Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288722111
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerRatisServer.java
 ##
 @@ -97,6 +112,101 @@ private static long nextCallId() {
 return CALL_ID_COUNTER.getAndIncrement() & Long.MAX_VALUE;
   }
 
+  /**
+   * Submit request to Ratis server.
+   * @param omRequest OMRequest received from the client.
+   * @return OMResponse - response returned to the client.
+   * @throws ServiceException if submission or processing of the request fails.
+   */
+  public OMResponse submitRequest(OMRequest omRequest) throws ServiceException 
{
+RaftClientRequest raftClientRequest =
+createWriteRaftClientRequest(omRequest);
+RaftClientReply raftClientReply;
+try {
+  raftClientReply = server.submitClientRequestAsync(raftClientRequest)
+  .get();
+} catch (Exception ex) {
+  throw new ServiceException(ex.getMessage(), ex);
+}
+
+return processReply(omRequest, raftClientReply);
+  }
+
+  /**
+   * Create Write RaftClient request from OMRequest.
+   * @param omRequest
+   * @return
+   */
+  private RaftClientRequest createWriteRaftClientRequest(OMRequest omRequest) {
+return new RaftClientRequest(clientId, server.getId(), raftGroupId,
+nextCallId(),
+Message.valueOf(OMRatisHelper.convertRequestToByteString(omRequest)),
+RaftClientRequest.writeRequestType(), null);
+  }
+
+  /**
+   * Process the raftClientReply and return OMResponse.
+   * @param omRequest
+   * @param reply
+   * @return
+   * @throws ServiceException
+   */
+  private OMResponse processReply(OMRequest omRequest, RaftClientReply reply)
+  throws ServiceException {
+// NotLeader exception is thrown only when the raft server to which the
+// request is submitted is not the leader. This can happen first time
+// when client is submitting request to OM.
+NotLeaderException notLeaderException = reply.getNotLeaderException();
+if (notLeaderException != null) {
+  throw new ServiceException(notLeaderException);
+}
+StateMachineException stateMachineException =
+reply.getStateMachineException();
+if (stateMachineException != null) {
+  OMResponse.Builder omResponse = OMResponse.newBuilder();
+  omResponse.setCmdType(omRequest.getCmdType());
+  omResponse.setSuccess(false);
+  omResponse.setMessage(stateMachineException.getCause().getMessage());
+  omResponse.setStatus(parseErrorStatus(
+  stateMachineException.getCause().getMessage()));
+  return omResponse.build();
+}
+
+try {
+  return OMRatisHelper.getOMResponseFromRaftClientReply(reply);
+} catch (InvalidProtocolBufferException ex) {
+  if (ex.getMessage() != null) {
+throw new ServiceException(ex.getMessage(), ex);
+  } else {
+throw new ServiceException(ex);
+  }
+}
+
+// TODO: Still need to handle RaftRetry failure exception and
+//  NotReplicated exception.
 
 Review comment:
   How are these exceptions handled currently?
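
   For context, the branching in processReply() quoted above can be mirrored in isolation. This is a minimal sketch: `Reply` and `Response` are hypothetical stand-ins for the Ratis `RaftClientReply` and the `OMResponse`, not the real classes; only the branch structure is illustrated.

   ```java
   // Stand-in for the reply-handling branches in processReply(): a NotLeader
   // condition is rethrown to the caller, a state-machine error becomes a
   // failed response, and anything else is treated as success.
   public class ReplyProcessingSketch {

       static class Reply {
           final String notLeaderError;     // non-null: request hit a non-leader
           final String stateMachineError;  // non-null: state machine rejected it
           Reply(String notLeaderError, String stateMachineError) {
               this.notLeaderError = notLeaderError;
               this.stateMachineError = stateMachineError;
           }
       }

       static class Response {
           final boolean success;
           final String message;
           Response(boolean success, String message) {
               this.success = success;
               this.message = message;
           }
       }

       static Response process(Reply reply) {
           if (reply.notLeaderError != null) {
               // Mirrors: throw new ServiceException(notLeaderException)
               throw new IllegalStateException(reply.notLeaderError);
           }
           if (reply.stateMachineError != null) {
               // Mirrors: omResponse.setSuccess(false).setMessage(cause.getMessage())
               return new Response(false, reply.stateMachineError);
           }
           return new Response(true, "OK");
       }

       public static void main(String[] args) {
           System.out.println(process(new Reply(null, null)).success);  // true
           System.out.println(process(new Reply(null, "BUCKET_NOT_FOUND")).message);
       }
   }
   ```

   The RaftRetry failure and NotReplicated cases flagged in the TODO would add further branches of the same shape.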





[GitHub] [hadoop] hanishakoneru commented on a change in pull request #850: HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…

2019-05-29 Thread GitBox
hanishakoneru commented on a change in pull request #850: HDDS-1551. Implement 
Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288791912
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/OMBucketCreateRequest.java
 ##
 @@ -0,0 +1,203 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request;
+
+import java.io.IOException;
+
+import com.google.common.base.Optional;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.crypto.CipherSuite;
+import org.apache.hadoop.crypto.key.KeyProvider;
+import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension;
+import org.apache.hadoop.fs.CommonConfigurationKeys;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.response.OMBucketCreateResponse;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.BucketEncryptionInfoProto;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.CreateBucketRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.CreateBucketResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.BucketInfo;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.ozone.protocolPB.OMPBHelper;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerRatisUtils;
+import org.apache.hadoop.util.Time;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+import static org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.CryptoProtocolVersionProto.ENCRYPTION_ZONES;
+
+/**
+ * Handles CreateBucket Request.
+ */
+public class OMBucketCreateRequest extends OMClientRequest {
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMBucketCreateRequest.class);
+
+  public OMBucketCreateRequest(OMRequest omRequest) {
+super(omRequest);
+  }
+  @Override
+  public OMRequest preExecute(OzoneManager ozoneManager) throws IOException {
+
+// Get original request.
+CreateBucketRequest createBucketRequest =
+getOmRequest().getCreateBucketRequest();
+BucketInfo bucketInfo = createBucketRequest.getBucketInfo();
+
+// Get KMS provider.
+KeyProviderCryptoExtension kmsProvider =
+ozoneManager.getKmsProvider();
+
+// Create new Bucket request with new bucket info.
+CreateBucketRequest.Builder newCreateBucketRequest =
+createBucketRequest.toBuilder();
+
+BucketInfo.Builder newBucketInfo = bucketInfo.toBuilder();
+
+newCreateBucketRequest.setBucketInfo(
 
 Review comment:
   newCreateBucketRequest.setBucketInfo is done again later. We should only set 
the creation time here.





[GitHub] [hadoop] hanishakoneru commented on a change in pull request #850: HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…

2019-05-29 Thread GitBox
hanishakoneru commented on a change in pull request #850: HDDS-1551. Implement 
Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288807522
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ratis/TestOzoneManagerDoubleBufferWithOMResponse.java
 ##
 @@ -390,7 +405,11 @@ private OMBucketCreateResponse createBucket(String 
volumeName,
 OmBucketInfo omBucketInfo =
 OmBucketInfo.newBuilder().setVolumeName(volumeName)
 .setBucketName(bucketName).setCreationTime(Time.now()).build();
-return new OMBucketCreateResponse(omBucketInfo);
+return new OMBucketCreateResponse(omBucketInfo, OMResponse.newBuilder()
 
 Review comment:
   OMDummyCreateBucketResponse seems to be doing the same thing as 
OMBucketCreateResponse. Why do we need 2 different tests?





[jira] [Commented] (HADOOP-16314) Make sure all end point URL is covered by the same AuthenticationFilter

2019-05-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16851399#comment-16851399
 ] 

Hadoop QA commented on HADOOP-16314:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 10 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
21m 37s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 11m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m  
6s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
18s{color} | {color:green} root: The patch generated 0 new + 124 unchanged - 3 
fixed = 124 total (was 127) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 12m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
22s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}114m 37s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
50s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
53s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
10s{color} | {color:green} hadoop-yarn-server-web-proxy in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m  
0s{color} | {color:green} hadoop-yarn-server-applicationhistoryservice in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
32s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 85m 18s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
48s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |

[GitHub] [hadoop] bharatviswa504 opened a new pull request #871: HDDS-1579. Create OMDoubleBuffer metrics.

2019-05-29 Thread GitBox
bharatviswa504 opened a new pull request #871: HDDS-1579. Create OMDoubleBuffer 
metrics.
URL: https://github.com/apache/hadoop/pull/871
 
 
   





[GitHub] [hadoop] hadoop-yetus commented on issue #850: HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…

2019-05-29 Thread GitBox
hadoop-yetus commented on issue #850: HDDS-1551. Implement Bucket Write 
Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#issuecomment-497142393
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 31 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 15 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 67 | Maven dependency ordering for branch |
   | +1 | mvninstall | 561 | trunk passed |
   | +1 | compile | 296 | trunk passed |
   | +1 | checkstyle | 70 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 839 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 145 | trunk passed |
   | 0 | spotbugs | 337 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 543 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 31 | Maven dependency ordering for patch |
   | +1 | mvninstall | 504 | the patch passed |
   | +1 | compile | 288 | the patch passed |
   | +1 | cc | 288 | the patch passed |
   | +1 | javac | 288 | the patch passed |
   | +1 | checkstyle | 80 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 661 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 80 | hadoop-ozone generated 4 new + 5 unchanged - 0 fixed = 
9 total (was 5) |
   | +1 | findbugs | 513 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 229 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2229 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 56 | The patch does not generate ASF License warnings. |
   | | | 7485 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.pipeline.TestNodeFailure |
   |   | hadoop.ozone.client.rpc.TestKeyInputStream |
   |   | hadoop.ozone.om.TestOmBlockVersioning |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.om.TestContainerReportWithKeys |
   |   | hadoop.hdds.scm.pipeline.TestPipelineClose |
   |   | hadoop.hdds.scm.pipeline.TestSCMPipelineManager |
   |   | hadoop.ozone.om.TestOmInit |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.hdds.scm.pipeline.TestSCMRestart |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/850 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle cc |
   | uname | Linux d715f54a540b 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0ead209 |
   | Default Java | 1.8.0_212 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/8/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/8/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/8/testReport/ |
   | Max. process+thread count | 3668 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/8/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-16314) Make sure all end point URL is covered by the same AuthenticationFilter

2019-05-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16851384#comment-16851384
 ] 

Hadoop QA commented on HADOOP-16314:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 10 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
6s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
22m 37s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 10m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
38s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 19s{color} | {color:orange} root: The patch generated 21 new + 124 unchanged 
- 3 fixed = 145 total (was 127) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 11m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
48s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}116m 35s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
22s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
18s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
24s{color} | {color:green} hadoop-yarn-server-web-proxy in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
50s{color} | {color:green} hadoop-yarn-server-applicationhistoryservice in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
39s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 85m 51s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
59s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |

[GitHub] [hadoop] xiaoyuyao merged pull request #830: HDDS-1530. Freon support big files larger than 2GB and add --bufferSize and --validateWrites options.

2019-05-29 Thread GitBox
xiaoyuyao merged pull request #830: HDDS-1530. Freon support big files larger 
than 2GB and add --bufferSize and --validateWrites options.
URL: https://github.com/apache/hadoop/pull/830
 
 
   





[GitHub] [hadoop] xiaoyuyao commented on issue #830: HDDS-1530. Freon support big files larger than 2GB and add --bufferSize and --validateWrites options.

2019-05-29 Thread GitBox
xiaoyuyao commented on issue #830: HDDS-1530. Freon support big files larger 
than 2GB and add --bufferSize and --validateWrites options.
URL: https://github.com/apache/hadoop/pull/830#issuecomment-497138201
 
 
   +1, I will merge/commit this shortly.





[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #830: HDDS-1530. Freon support big files larger than 2GB and add --bufferSize and --validateWrites options.

2019-05-29 Thread GitBox
xiaoyuyao commented on a change in pull request #830: HDDS-1530. Freon support 
big files larger than 2GB and add --bufferSize and --validateWrites options.
URL: https://github.com/apache/hadoop/pull/830#discussion_r288799644
 
 

 ##
 File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/RandomKeyGenerator.java
 ##
 @@ -622,7 +642,11 @@ public void run() {
 try (Scope writeScope = GlobalTracer.get()
 .buildSpan("writeKeyData")
 .startActive(true)) {
-  os.write(keyValue);
+  for (long nrRemaining = keySize - randomValue.length;
+nrRemaining > 0; nrRemaining -= bufferSize) {
+int curSize = (int)Math.min(bufferSize, nrRemaining);
+os.write(keyValueBuffer, 0, curSize);
 
 Review comment:
   You are right; no issue at the socket layer. I'm thinking of the DataNode 
side: in this scheme the chunk files of the same key being written could be 
identical, which might increase write performance compared with 2GB of fully 
random chunks. As long as we use it consistently, it should be fine. Later on, 
we can add an option to write zeros by default and random data up to bufferSize 
when a parameter is specified.
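
   The chunked-write loop under review can be sketched in isolation. This is a 
minimal sketch, not the Freon code itself: `writeInChunks` and the 
`ByteArrayOutputStream` sink are illustrative, and the single reused buffer is 
why the written data repeats every bufferSize bytes.

   ```java
   import java.io.ByteArrayOutputStream;

   // Standalone sketch of writing `total` bytes in chunks of at most
   // bufferSize, reusing one pre-filled buffer for every chunk.
   public class ChunkedWriteSketch {

       // Writes total bytes to out and returns the number of bytes written.
       static long writeInChunks(ByteArrayOutputStream out, long total,
           int bufferSize) {
           byte[] buffer = new byte[bufferSize]; // reused; fill once up front
           long written = 0;
           for (long remaining = total; remaining > 0; remaining -= bufferSize) {
               int curSize = (int) Math.min(bufferSize, remaining);
               out.write(buffer, 0, curSize);
               written += curSize;
           }
           return written;
       }

       public static void main(String[] args) {
           ByteArrayOutputStream out = new ByteArrayOutputStream();
           long written = writeInChunks(out, 10_000L, 4096); // 4096 + 4096 + 1808
           System.out.println(written == out.size() && written == 10_000L); // true
       }
   }
   ```

   The final iteration writes only the remainder (`Math.min(bufferSize, 
remaining)`), matching the loop in RandomKeyGenerator above.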





[GitHub] [hadoop] hadoop-yetus commented on issue #850: HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…

2019-05-29 Thread GitBox
hadoop-yetus commented on issue #850: HDDS-1551. Implement Bucket Write 
Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#issuecomment-497134161
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 29 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 15 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for branch |
   | +1 | mvninstall | 501 | trunk passed |
   | +1 | compile | 253 | trunk passed |
   | +1 | checkstyle | 67 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 807 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 136 | trunk passed |
   | 0 | spotbugs | 286 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 469 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 32 | Maven dependency ordering for patch |
   | +1 | mvninstall | 486 | the patch passed |
   | +1 | compile | 271 | the patch passed |
   | +1 | cc | 271 | the patch passed |
   | +1 | javac | 271 | the patch passed |
   | +1 | checkstyle | 68 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 667 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 73 | hadoop-ozone generated 4 new + 5 unchanged - 0 fixed = 
9 total (was 5) |
   | +1 | findbugs | 557 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 250 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1515 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 48 | The patch does not generate ASF License warnings. |
   | | | 6502 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.web.client.TestKeys |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachine |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/850 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle cc |
   | uname | Linux db3c35b0cc9f 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0ead209 |
   | Default Java | 1.8.0_212 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/7/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/7/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/7/testReport/ |
   | Max. process+thread count | 3812 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/7/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #850: HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…

2019-05-29 Thread GitBox
hadoop-yetus commented on issue #850: HDDS-1551. Implement Bucket Write 
Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#issuecomment-497128592
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 39 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 15 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 71 | Maven dependency ordering for branch |
   | +1 | mvninstall | 554 | trunk passed |
   | +1 | compile | 258 | trunk passed |
   | +1 | checkstyle | 63 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 791 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 143 | trunk passed |
   | 0 | spotbugs | 303 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 491 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 29 | Maven dependency ordering for patch |
   | +1 | mvninstall | 483 | the patch passed |
   | +1 | compile | 263 | the patch passed |
   | +1 | cc | 263 | the patch passed |
   | +1 | javac | 263 | the patch passed |
   | +1 | checkstyle | 71 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 630 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 73 | hadoop-ozone generated 4 new + 5 unchanged - 0 fixed = 
9 total (was 5) |
   | +1 | findbugs | 503 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 242 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1007 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 57 | The patch does not generate ASF License warnings. |
   | | | 5964 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.pipeline.TestSCMPipelineManager |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/850 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle cc |
   | uname | Linux 1d3000f18253 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 751f0df |
   | Default Java | 1.8.0_212 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/6/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/6/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/6/testReport/ |
   | Max. process+thread count | 5137 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/6/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] xiaoyuyao commented on issue #847: HDDS-1539. Implement addAcl, removeAcl, setAcl, getAcl for Volume. Contributed Ajay Kumar.

2019-05-29 Thread GitBox
xiaoyuyao commented on issue #847: HDDS-1539. Implement 
addAcl,removeAcl,setAcl,getAcl for Volume. Contributed Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/847#issuecomment-497123569
 
 
   +1, pending Jenkins.





[GitHub] [hadoop] hadoop-yetus commented on issue #847: HDDS-1539. Implement addAcl, removeAcl, setAcl, getAcl for Volume. Contributed Ajay Kumar.

2019-05-29 Thread GitBox
hadoop-yetus commented on issue #847: HDDS-1539. Implement 
addAcl,removeAcl,setAcl,getAcl for Volume. Contributed Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/847#issuecomment-497122435
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 31 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 13 | Maven dependency ordering for branch |
   | +1 | mvninstall | 526 | trunk passed |
   | +1 | compile | 260 | trunk passed |
   | +1 | checkstyle | 70 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 810 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 133 | trunk passed |
   | 0 | spotbugs | 291 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 469 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 15 | Maven dependency ordering for patch |
   | +1 | mvninstall | 489 | the patch passed |
   | +1 | compile | 264 | the patch passed |
   | +1 | cc | 264 | the patch passed |
   | +1 | javac | 264 | the patch passed |
   | +1 | checkstyle | 76 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 628 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 72 | hadoop-ozone generated 9 new + 5 unchanged - 0 fixed = 
14 total (was 5) |
   | +1 | findbugs | 491 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 233 | hadoop-hdds in the patch passed. |
   | -1 | unit | 113 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 44 | The patch does not generate ASF License warnings. |
   | | | 4976 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.ratis.TestOzoneManagerRatisServer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-847/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/847 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 287efc08253b 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 751f0df |
   | Default Java | 1.8.0_212 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-847/2/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-847/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-847/2/testReport/ |
   | Max. process+thread count | 1328 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/client 
hadoop-ozone/ozone-manager hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-847/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #843: HADOOP-15183 S3Guard store becomes inconsistent after partial failure of rename

2019-05-29 Thread GitBox
hadoop-yetus commented on issue #843: HADOOP-15183 S3Guard store becomes 
inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/843#issuecomment-497118740
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 49 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 1 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 28 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 77 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1150 | trunk passed |
   | +1 | compile | 1151 | trunk passed |
   | +1 | checkstyle | 144 | trunk passed |
   | +1 | mvnsite | 122 | trunk passed |
   | +1 | shadedclient | 1033 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 97 | trunk passed |
   | 0 | spotbugs | 66 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 200 | trunk passed |
   | -0 | patch | 98 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for patch |
   | +1 | mvninstall | 84 | the patch passed |
   | +1 | compile | 1407 | the patch passed |
   | +1 | javac | 1407 | the patch passed |
   | -0 | checkstyle | 175 | root: The patch generated 12 new + 98 unchanged - 
4 fixed = 110 total (was 102) |
   | +1 | mvnsite | 141 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 5 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 762 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 102 | the patch passed |
   | +1 | findbugs | 244 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 604 | hadoop-common in the patch passed. |
   | +1 | unit | 295 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 58 | The patch does not generate ASF License warnings. |
   | | | 7933 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-843/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/843 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 174a51b985e1 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 751f0df |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-843/4/artifact/out/diff-checkstyle-root.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-843/4/testReport/ |
   | Max. process+thread count | 1387 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-843/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] bharatviswa504 commented on issue #850: HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…

2019-05-29 Thread GitBox
bharatviswa504 commented on issue #850: HDDS-1551. Implement Bucket Write 
Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#issuecomment-497108920
 
 
   /retest





[jira] [Updated] (HADOOP-16336) finish variable is unused in ZStandardCompressor

2019-05-29 Thread Daniel Templeton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HADOOP-16336:
--
Summary: finish variable is unused in ZStandardCompressor  (was: finish 
valiable is unused in ZStandardCompressor)

> finish variable is unused in ZStandardCompressor
> 
>
> Key: HADOOP-16336
> URL: https://issues.apache.org/jira/browse/HADOOP-16336
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Daniel Templeton
>Priority: Trivial
>  Labels: newbie
>
> The boolean {{finish}} variable is unused and can be removed:
> {code:java}
>   private boolean finish, finished;
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16336) finish valiable is unused in ZStandardCompressor

2019-05-29 Thread Daniel Templeton (JIRA)
Daniel Templeton created HADOOP-16336:
-

 Summary: finish valiable is unused in ZStandardCompressor
 Key: HADOOP-16336
 URL: https://issues.apache.org/jira/browse/HADOOP-16336
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.3.0
Reporter: Daniel Templeton


The boolean {{finish}} variable is unused and can be removed:

{code:java}
  private boolean finish, finished;
{code}
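
A minimal sketch of the cleanup this issue asks for (illustrative class and method names, not the real ZStandardCompressor): the `finished` flag alone tracks end-of-stream state, so the write-only `finish` field can simply be dropped.

```java
// Hypothetical demo of removing a dead flag: `finished` is the only state
// actually consulted, so the unused `finish` field has been deleted.
public class FlagCleanupDemo {
  private boolean finished;   // the only flag that is ever read

  void finish() {             // request end of stream
    finished = true;          // previously also set the unused `finish` field
  }

  boolean finished() {
    return finished;
  }

  public static void main(String[] args) {
    FlagCleanupDemo c = new FlagCleanupDemo();
    System.out.println(c.finished());  // false
    c.finish();
    System.out.println(c.finished());  // true
  }
}
```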






[GitHub] [hadoop] anuengineer commented on a change in pull request #847: HDDS-1539. Implement addAcl, removeAcl, setAcl, getAcl for Volume. Contributed Ajay Kumar.

2019-05-29 Thread GitBox
anuengineer commented on a change in pull request #847: HDDS-1539. Implement 
addAcl,removeAcl,setAcl,getAcl for Volume. Contributed Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/847#discussion_r288762291
 
 

 ##
 File path: hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto
 ##
 @@ -471,12 +500,55 @@ message OzoneAclInfo {
 repeated OzoneAclRights rights = 3;
 }
 
+message GetAclRequest {
+  required OzoneObj obj = 1;
+}
+
+message GetAclResponse {
+  repeated OzoneAclInfo acls = 1;
+}
+
+message AddAclRequest {
+  required OzoneObj obj = 1;
+  required OzoneAclInfo acl = 2;
+}
+
+message AddAclResponse {
+  required bool response = 1;
+}
+
+message RemoveAclRequest {
+  required OzoneObj obj = 1;
+  required OzoneAclInfo acl = 2;
+}
+
+message RemoveAclResponse {
+  required bool response = 1;
+}
+
+message SetAclRequest {
+  required OzoneObj obj = 1;
+  repeated OzoneAclInfo acl = 2;
+}
+
+message SetAclResponse {
+  required bool response = 1;
+}
+
+message DeleteAclRequest {
 
 Review comment:
   ok
   





[GitHub] [hadoop] anuengineer commented on a change in pull request #847: HDDS-1539. Implement addAcl, removeAcl, setAcl, getAcl for Volume. Contributed Ajay Kumar.

2019-05-29 Thread GitBox
anuengineer commented on a change in pull request #847: HDDS-1539. Implement 
addAcl,removeAcl,setAcl,getAcl for Volume. Contributed Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/847#discussion_r288762165
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OzoneAcl.java
 ##
 @@ -75,9 +77,20 @@ public OzoneAcl(ACLIdentityType type, String name, ACLType 
acl) {
* @param name - Name of user
* @param acls - Rights
*/
-  public OzoneAcl(ACLIdentityType type, String name, List<ACLType> acls) {
+  public OzoneAcl(ACLIdentityType type, String name, BitSet acls) {
+Objects.requireNonNull(type);
+Objects.requireNonNull(acls);
 
 Review comment:
   No, I don't think so. If it is world, it has to be world. For Anonymous, you 
can insist that the client set the right value. So S3gateway will have to do 
the right thing to talk to us.





[GitHub] [hadoop] anuengineer commented on a change in pull request #847: HDDS-1539. Implement addAcl, removeAcl, setAcl, getAcl for Volume. Contributed Ajay Kumar.

2019-05-29 Thread GitBox
anuengineer commented on a change in pull request #847: HDDS-1539. Implement 
addAcl,removeAcl,setAcl,getAcl for Volume. Contributed Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/847#discussion_r288761616
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/protocolPB/OMPBHelper.java
 ##
 @@ -122,10 +121,9 @@ public static OzoneAcl convertOzoneAcl(OzoneAclInfo 
aclInfo) {
   throw new IllegalArgumentException("ACL type is not recognized");
 }
 
-List<ACLType> aclRights = new ArrayList<>();
-for (OzoneAclRights acl : aclInfo.getRightsList()) {
-  aclRights.add(ACLType.valueOf(acl.name()));
-}
+BitSet aclRights = new BitSet(aclInfo.getRightsList().size());
 
 Review comment:
   I would have used an int, since you really need only 9 bits. But I come from a 
C background; not sure if Java does that at all.





[GitHub] [hadoop] anuengineer merged pull request #870: HDDS-1542. Create Radix tree to support ozone prefix ACLs. Contribute…

2019-05-29 Thread GitBox
anuengineer merged pull request #870: HDDS-1542. Create Radix tree to support 
ozone prefix ACLs. Contribute…
URL: https://github.com/apache/hadoop/pull/870
 
 
   





[GitHub] [hadoop] ajayydv commented on a change in pull request #847: HDDS-1539. Implement addAcl, removeAcl, setAcl, getAcl for Volume. Contributed Ajay Kumar.

2019-05-29 Thread GitBox
ajayydv commented on a change in pull request #847: HDDS-1539. Implement 
addAcl,removeAcl,setAcl,getAcl for Volume. Contributed Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/847#discussion_r288756288
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/ObjectStore.java
 ##
 @@ -444,4 +446,50 @@ public String getCanonicalServiceName() {
 return proxy.getCanonicalServiceName();
   }
 
+  /**
+   * Add acl for Ozone object. Return true if acl is added successfully else
+   * false.
+   * @param obj Ozone object for which acl should be added.
+   * @param acl ozone acl top be added.
+   *
 
 Review comment:
   done





[GitHub] [hadoop] ajayydv commented on a change in pull request #847: HDDS-1539. Implement addAcl, removeAcl, setAcl, getAcl for Volume. Contributed Ajay Kumar.

2019-05-29 Thread GitBox
ajayydv commented on a change in pull request #847: HDDS-1539. Implement 
addAcl,removeAcl,setAcl,getAcl for Volume. Contributed Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/847#discussion_r288756235
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OzoneAcl.java
 ##
 @@ -75,9 +77,20 @@ public OzoneAcl(ACLIdentityType type, String name, ACLType 
acl) {
* @param name - Name of user
* @param acls - Rights
*/
-  public OzoneAcl(ACLIdentityType type, String name, List<ACLType> acls) {
+  public OzoneAcl(ACLIdentityType type, String name, BitSet acls) {
+Objects.requireNonNull(type);
+Objects.requireNonNull(acls);
+
+if(acls.cardinality() > ACLType.getNoOfAcls()) {
+  throw new IllegalArgumentException("Acl bitset passed has unexpected " +
+  "size. bitset size:" + acls.cardinality() + ", bitset:"
+  + acls.toString());
+}
+
+this.aclBitSet = new BitSet();
 
 Review comment:
   done





[GitHub] [hadoop] ajayydv commented on a change in pull request #847: HDDS-1539. Implement addAcl, removeAcl, setAcl, getAcl for Volume. Contributed Ajay Kumar.

2019-05-29 Thread GitBox
ajayydv commented on a change in pull request #847: HDDS-1539. Implement 
addAcl,removeAcl,setAcl,getAcl for Volume. Contributed Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/847#discussion_r288756168
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OzoneAcl.java
 ##
 @@ -56,8 +58,8 @@ public OzoneAcl() {
*/
   public OzoneAcl(ACLIdentityType type, String name, ACLType acl) {
 this.name = name;
-this.rights = new ArrayList<>();
-this.rights.add(acl);
+this.aclBitSet = new BitSet(ACLType.values().length);
 
 Review comment:
   done





[GitHub] [hadoop] ajayydv commented on a change in pull request #847: HDDS-1539. Implement addAcl, removeAcl, setAcl, getAcl for Volume. Contributed Ajay Kumar.

2019-05-29 Thread GitBox
ajayydv commented on a change in pull request #847: HDDS-1539. Implement 
addAcl,removeAcl,setAcl,getAcl for Volume. Contributed Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/847#discussion_r288756124
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/protocolPB/OMPBHelper.java
 ##
 @@ -122,10 +121,9 @@ public static OzoneAcl convertOzoneAcl(OzoneAclInfo 
aclInfo) {
   throw new IllegalArgumentException("ACL type is not recognized");
 }
 
-List<ACLType> aclRights = new ArrayList<>();
-for (OzoneAclRights acl : aclInfo.getRightsList()) {
-  aclRights.add(ACLType.valueOf(acl.name()));
-}
+BitSet aclRights = new BitSet(aclInfo.getRightsList().size());
 
 Review comment:
   BitSet is concise, space-efficient, and enforces order, since we are setting 
the bits in the order of our enum list. I am open to any suggestion to improve it.
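
The BitSet encoding being debated here can be sketched as follows. This is a minimal illustration, not Ozone's actual code: `ACLType` below is a stand-in enum, and each constant's ordinal is used as its bit index, which is why the encoding depends on the order of the enum list.

```java
import java.util.BitSet;
import java.util.EnumSet;
import java.util.List;

public class AclBitSetDemo {
  // Stand-in for Ozone's ACLType enum; ordinals double as bit indices.
  enum ACLType { READ, WRITE, CREATE, LIST, DELETE, READ_ACL, WRITE_ACL, ALL, NONE }

  // Encode a list of rights into a BitSet, one bit per ACLType ordinal.
  static BitSet toBitSet(List<ACLType> rights) {
    BitSet bits = new BitSet(ACLType.values().length);
    for (ACLType acl : rights) {
      bits.set(acl.ordinal());
    }
    return bits;
  }

  // Decode the BitSet back into a set of rights.
  static EnumSet<ACLType> fromBitSet(BitSet bits) {
    EnumSet<ACLType> rights = EnumSet.noneOf(ACLType.class);
    for (int i = bits.nextSetBit(0); i >= 0; i = bits.nextSetBit(i + 1)) {
      rights.add(ACLType.values()[i]);
    }
    return rights;
  }

  public static void main(String[] args) {
    BitSet bits = toBitSet(List.of(ACLType.READ, ACLType.WRITE_ACL));
    System.out.println(bits.cardinality());  // 2
    System.out.println(fromBitSet(bits));    // [READ, WRITE_ACL]
  }
}
```

An `int` bitmask would indeed fit 9 rights, as suggested above; `BitSet` trades a little space for not having to hand-roll the mask arithmetic.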





[GitHub] [hadoop] ajayydv commented on a change in pull request #847: HDDS-1539. Implement addAcl, removeAcl, setAcl, getAcl for Volume. Contributed Ajay Kumar.

2019-05-29 Thread GitBox
ajayydv commented on a change in pull request #847: HDDS-1539. Implement 
addAcl,removeAcl,setAcl,getAcl for Volume. Contributed Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/847#discussion_r288755580
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/acl/OzoneObjInfo.java
 ##
 @@ -69,6 +72,32 @@ public String getKeyName() {
 return keyName;
   }
 
+  public static OzoneObjInfo fromProtobuf(OzoneManagerProtocolProtos.OzoneObj
+  proto) {
+Builder builder = new Builder()
+.setResType(ResourceType.valueOf(proto.getResType().name()))
+.setStoreType(StoreType.valueOf(proto.getStoreType().name()));
+StringTokenizer tokenizer = new StringTokenizer(proto.getPath(),
+OzoneConsts.OZONE_URI_DELIMITER);
+// Set volume name.
+if (tokenizer.hasMoreTokens()) {
+  builder.setVolumeName(tokenizer.nextToken());
+}
+// Set bucket name.
+if (tokenizer.hasMoreTokens()) {
+  builder.setBucketName(tokenizer.nextToken());
+}
+// Set key name
+StringBuffer sb = new StringBuffer();
 
 Review comment:
   done
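
The path-splitting logic under review can be sketched like this: the first two delimiter-separated tokens are the volume and bucket, and everything after the second delimiter is the key name, which may itself contain `/`. Names here are illustrative, not the real `OzoneObjInfo` API, and the sketch assumes no leading delimiter in the path.

```java
public class OzonePathSplit {
  // Stand-in for OzoneConsts.OZONE_URI_DELIMITER.
  static final String DELIM = "/";

  // Returns {volume, bucket, key}; missing components are null.
  static String[] split(String path) {
    String volume = null, bucket = null, key = null;
    // Limit of 3 keeps any '/' inside the key name intact, so no
    // StringBuilder reassembly of trailing tokens is needed.
    String[] parts = path.split(DELIM, 3);
    if (parts.length > 0 && !parts[0].isEmpty()) {
      volume = parts[0];
    }
    if (parts.length > 1) {
      bucket = parts[1];
    }
    if (parts.length > 2) {
      key = parts[2];
    }
    return new String[]{volume, bucket, key};
  }

  public static void main(String[] args) {
    String[] r = split("vol1/bucket1/dir1/dir2/key1");
    System.out.println(r[0] + " " + r[1] + " " + r[2]);
    // vol1 bucket1 dir1/dir2/key1
  }
}
```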





[GitHub] [hadoop] ajayydv commented on a change in pull request #847: HDDS-1539. Implement addAcl, removeAcl, setAcl, getAcl for Volume. Contributed Ajay Kumar.

2019-05-29 Thread GitBox
ajayydv commented on a change in pull request #847: HDDS-1539. Implement 
addAcl,removeAcl,setAcl,getAcl for Volume. Contributed Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/847#discussion_r288755528
 
 

 ##
 File path: hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto
 ##
 @@ -471,12 +500,55 @@ message OzoneAclInfo {
 repeated OzoneAclRights rights = 3;
 }
 
+message GetAclRequest {
+  required OzoneObj obj = 1;
+}
+
+message GetAclResponse {
+  repeated OzoneAclInfo acls = 1;
+}
+
+message AddAclRequest {
+  required OzoneObj obj = 1;
+  required OzoneAclInfo acl = 2;
+}
+
+message AddAclResponse {
+  required bool response = 1;
+}
+
+message RemoveAclRequest {
+  required OzoneObj obj = 1;
+  required OzoneAclInfo acl = 2;
+}
+
+message RemoveAclResponse {
+  required bool response = 1;
+}
+
+message SetAclRequest {
+  required OzoneObj obj = 1;
+  repeated OzoneAclInfo acl = 2;
+}
+
+message SetAclResponse {
+  required bool response = 1;
+}
+
+message DeleteAclRequest {
 
 Review comment:
   @anuengineer CLIENT_IP was discussed when we added the ACL API for Ranger. The 
idea is to support ACLs based on an IP or IP range. It might be of use in the future.





[GitHub] [hadoop] ajayydv commented on a change in pull request #847: HDDS-1539. Implement addAcl, removeAcl, setAcl, getAcl for Volume. Contributed Ajay Kumar.

2019-05-29 Thread GitBox
ajayydv commented on a change in pull request #847: HDDS-1539. Implement 
addAcl,removeAcl,setAcl,getAcl for Volume. Contributed Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/847#discussion_r288755156
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -2984,6 +2986,88 @@ public OmKeyInfo lookupFile(OmKeyArgs args) throws 
IOException {
 }
   }
 
+  /**
+   * Add acl for Ozone object. Return true if acl is added successfully else
+   * false.
+   *
+   * @param obj Ozone object for which acl should be added.
+   * @param acl ozone acl top be added.
+   * @throws IOException if there is error.
+   */
+  @Override
+  public boolean addAcl(OzoneObj obj, OzoneAcl acl) throws IOException {
+if(isAclEnabled) {
+  checkAcls(obj.getResourceType(), obj.getStoreType(), ACLType.WRITE_ACL,
+  obj.getVolumeName(), obj.getBucketName(), obj.getKeyName());
+}
+if(obj.getResourceType().equals(ResourceType.VOLUME)) {
+  return volumeManager.addAcl(obj, acl);
+}
+
+return false;
+  }
+
+  /**
+   * Remove acl for Ozone object. Return true if acl is removed successfully
+   * else false.
+   *
+   * @param obj Ozone object.
+   * @param acl Ozone acl to be removed.
+   * @throws IOException if there is error.
+   */
+  @Override
+  public boolean removeAcl(OzoneObj obj, OzoneAcl acl) throws IOException {
+if(isAclEnabled) {
+  checkAcls(obj.getResourceType(), obj.getStoreType(), ACLType.WRITE_ACL,
+  obj.getVolumeName(), obj.getBucketName(), obj.getKeyName());
+}
+if(obj.getResourceType().equals(ResourceType.VOLUME)) {
+  return volumeManager.removeAcl(obj, acl);
+}
+
+return false;
+  }
+
+  /**
+   * Acls to be set for given Ozone object. This operation resets the ACL for
+   * the given object to the list of ACLs provided in the argument.
+   *
+   * @param obj Ozone object.
+   * @param acls List of acls.
+   * @throws IOException if there is error.
+   */
+  @Override
+  public boolean setAcl(OzoneObj obj, List<OzoneAcl> acls) throws IOException {
+if(isAclEnabled) {
+  checkAcls(obj.getResourceType(), obj.getStoreType(), ACLType.WRITE_ACL,
+  obj.getVolumeName(), obj.getBucketName(), obj.getKeyName());
+}
+if(obj.getResourceType().equals(ResourceType.VOLUME)) {
+  return volumeManager.setAcl(obj, acls);
+}
+
+return false;
+  }
+
+  /**
+   * Returns list of ACLs for given Ozone object.
+   *
+   * @param obj Ozone object.
+   * @throws IOException if there is error.
+   */
+  @Override
+  public List<OzoneAcl> getAcl(OzoneObj obj) throws IOException {
+if(isAclEnabled) {
+  checkAcls(obj.getResourceType(), obj.getStoreType(), ACLType.WRITE_ACL,
 
 Review comment:
   done





[GitHub] [hadoop] ajayydv commented on a change in pull request #847: HDDS-1539. Implement addAcl, removeAcl, setAcl, getAcl for Volume. Contributed Ajay Kumar.

2019-05-29 Thread GitBox
ajayydv commented on a change in pull request #847: HDDS-1539. Implement 
addAcl,removeAcl,setAcl,getAcl for Volume. Contributed Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/847#discussion_r288755097
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -2984,6 +2986,88 @@ public OmKeyInfo lookupFile(OmKeyArgs args) throws 
IOException {
 }
   }
 
+  /**
+   * Add acl for Ozone object. Return true if acl is added successfully else
+   * false.
+   *
+   * @param obj Ozone object for which acl should be added.
+   * @param acl ozone acl to be added.
+   * @throws IOException if there is error.
+   */
+  @Override
+  public boolean addAcl(OzoneObj obj, OzoneAcl acl) throws IOException {
+if(isAclEnabled) {
+  checkAcls(obj.getResourceType(), obj.getStoreType(), ACLType.WRITE_ACL,
+  obj.getVolumeName(), obj.getBucketName(), obj.getKeyName());
+}
+if(obj.getResourceType().equals(ResourceType.VOLUME)) {
+  return volumeManager.addAcl(obj, acl);
+}
+
 
 Review comment:
   done





[GitHub] [hadoop] ajayydv commented on a change in pull request #847: HDDS-1539. Implement addAcl, removeAcl, setAcl, getAcl for Volume. Contributed Ajay Kumar.

2019-05-29 Thread GitBox
ajayydv commented on a change in pull request #847: HDDS-1539. Implement 
addAcl,removeAcl,setAcl,getAcl for Volume. Contributed Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/847#discussion_r288755189
 
 

 ##
 File path: hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto
 ##
 @@ -471,12 +500,55 @@ message OzoneAclInfo {
 repeated OzoneAclRights rights = 3;
 }
 
+message GetAclRequest {
+  required OzoneObj obj = 1;
+}
+
+message GetAclResponse {
+  repeated OzoneAclInfo acls = 1;
+}
+
+message AddAclRequest {
+  required OzoneObj obj = 1;
+  required OzoneAclInfo acl = 2;
+}
+
+message AddAclResponse {
+  required bool response = 1;
+}
+
+message RemoveAclRequest {
+  required OzoneObj obj = 1;
+  required OzoneAclInfo acl = 2;
+}
+
+message RemoveAclResponse {
+  required bool response = 1;
+}
+
+message SetAclRequest {
+  required OzoneObj obj = 1;
+  repeated OzoneAclInfo acl = 2;
+}
+
+message SetAclResponse {
+  required bool response = 1;
+}
+
+message DeleteAclRequest {
 
 Review comment:
   Looks redundant, removed.





[GitHub] [hadoop] ajayydv commented on a change in pull request #847: HDDS-1539. Implement addAcl, removeAcl, setAcl, getAcl for Volume. Contributed Ajay Kumar.

2019-05-29 Thread GitBox
ajayydv commented on a change in pull request #847: HDDS-1539. Implement 
addAcl,removeAcl,setAcl,getAcl for Volume. Contributed Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/847#discussion_r288755011
 
 

 ##
 File path: hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto
 ##
 @@ -446,6 +459,22 @@ message BucketArgs {
 repeated hadoop.hdds.KeyValue metadata = 7;
 }
 
+message OzoneObj {
+  enum ObjectType {
+VOLUME = 1;
+BUCKET = 2;
+KEY = 3;
+  }
+
+  enum StoreType {
+OZONE = 1;
+S3 = 2;
+  }
+  required ObjectType resType = 1;
+  required StoreType storeType = 2;
 
 Review comment:
   done





[GitHub] [hadoop] ajayydv commented on a change in pull request #847: HDDS-1539. Implement addAcl, removeAcl, setAcl, getAcl for Volume. Contributed Ajay Kumar.

2019-05-29 Thread GitBox
ajayydv commented on a change in pull request #847: HDDS-1539. Implement 
addAcl,removeAcl,setAcl,getAcl for Volume. Contributed Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/847#discussion_r288754975
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/protocolPB/OMPBHelper.java
 ##
 @@ -84,15 +84,14 @@ public static OzoneAclInfo convertOzoneAcl(OzoneAcl acl) {
 default:
   throw new IllegalArgumentException("ACL type is not recognized");
 }
-List<OzoneAclRights> aclRights = new ArrayList<>();
-
-for (ACLType right : acl.getRights()) {
-  aclRights.add(OzoneAclRights.valueOf(right.name()));
-}
+List<OzoneAclRights> ozAclRights =
+new ArrayList<>(acl.getAclBitSet().cardinality());
+acl.getAclBitSet().stream().forEach(a -> ozAclRights.add(
+OzoneAclRights.valueOf(ACLType.values()[a].name())));
 
 
 Review comment:
   @anuengineer thanks for the review. I remember that discussion; I think the 
reason was too many type casts involved with streams. I don't see any consistent 
approach to tackle it (we are using streams all over the place). Updated the other 
stream usages to for-each loops. In this particular case for-each doesn't support 
BitSet, so I left it as is. Hopefully the performance of streams and parallel 
streams will improve in Java.
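For reference, the non-stream BitSet idiom mentioned above uses `BitSet.nextSetBit` in a plain `for` loop (the enhanced for loop does not work on BitSet). The sketch below uses an illustrative enum, not the actual Ozone `ACLType`:

```java
import java.util.ArrayList;
import java.util.BitSet;
import java.util.List;

// Sketch of iterating a BitSet of enum ordinals without the Stream API,
// using the classic nextSetBit idiom. The enum is illustrative only.
public class BitSetAclNames {

    enum AclType { READ, WRITE, DELETE, READ_ACL, WRITE_ACL }

    static List<String> namesOf(BitSet bits) {
        List<String> names = new ArrayList<>(bits.cardinality());
        // nextSetBit returns -1 once no set bit remains at or after the index
        for (int i = bits.nextSetBit(0); i >= 0; i = bits.nextSetBit(i + 1)) {
            names.add(AclType.values()[i].name());
        }
        return names;
    }

    public static void main(String[] args) {
        BitSet bits = new BitSet();
        bits.set(AclType.READ.ordinal());      // bit 0
        bits.set(AclType.WRITE_ACL.ordinal()); // bit 4
        System.out.println(namesOf(bits));     // prints [READ, WRITE_ACL]
    }
}
```

The loop avoids the boxing that `BitSet.stream()` introduces, at the cost of a slightly less declarative shape.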





[GitHub] [hadoop] hadoop-yetus commented on issue #868: HDDS-1568 : Add RocksDB metrics to OM.

2019-05-29 Thread GitBox
hadoop-yetus commented on issue #868: HDDS-1568 : Add RocksDB metrics to OM.
URL: https://github.com/apache/hadoop/pull/868#issuecomment-497091130
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 37 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 13 | Maven dependency ordering for branch |
   | +1 | mvninstall | 519 | trunk passed |
   | +1 | compile | 255 | trunk passed |
   | +1 | checkstyle | 74 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 821 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 154 | trunk passed |
   | 0 | spotbugs | 296 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 492 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 16 | Maven dependency ordering for patch |
   | -1 | mvninstall | 178 | hadoop-ozone in the patch failed. |
   | +1 | compile | 265 | the patch passed |
   | +1 | javac | 265 | the patch passed |
   | +1 | checkstyle | 95 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 663 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 169 | the patch passed |
   | +1 | findbugs | 496 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 251 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1284 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 69 | The patch does not generate ASF License warnings. |
   | | | 6195 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.TestMiniOzoneCluster |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-868/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/868 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux df569ff468e5 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / abf76ac |
   | Default Java | 1.8.0_212 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-868/2/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-868/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-868/2/testReport/ |
   | Max. process+thread count | 5385 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/framework U: hadoop-hdds |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-868/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #870: HDDS-1542. Create Radix tree to support ozone prefix ACLs. Contribute…

2019-05-29 Thread GitBox
hadoop-yetus commented on issue #870: HDDS-1542. Create Radix tree to support 
ozone prefix ACLs. Contribute…
URL: https://github.com/apache/hadoop/pull/870#issuecomment-497088055
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 31 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 525 | trunk passed |
   | +1 | compile | 253 | trunk passed |
   | +1 | checkstyle | 72 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 825 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 145 | trunk passed |
   | 0 | spotbugs | 292 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 472 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 487 | the patch passed |
   | +1 | compile | 288 | the patch passed |
   | +1 | javac | 288 | the patch passed |
   | +1 | checkstyle | 82 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 714 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 141 | the patch passed |
   | +1 | findbugs | 496 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 222 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1133 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 42 | The patch does not generate ASF License warnings. |
   | | | 6106 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-870/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/870 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 25bab91020a0 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / abf76ac |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-870/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-870/2/testReport/ |
   | Max. process+thread count | 4870 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common U: hadoop-ozone/common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-870/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] avijayanhwx commented on a change in pull request #868: HDDS-1568 : Add RocksDB metrics to OM.

2019-05-29 Thread GitBox
avijayanhwx commented on a change in pull request #868: HDDS-1568 : Add RocksDB 
metrics to OM.
URL: https://github.com/apache/hadoop/pull/868#discussion_r288741003
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/DBStoreBuilder.java
 ##
 @@ -187,7 +196,13 @@ private DBOptions getDbProfile() {
 
 if (option == null) {
   LOG.info("Using default options. {}", dbProfile.toString());
-  return dbProfile.getDBOptions();
+  option = dbProfile.getDBOptions();
+}
+
+if (!rocksDbStat.equals(OZONE_METADATA_STORE_ROCKSDB_STATISTICS_OFF)) {
 
 Review comment:
   @anuengineer We don't enable the metrics by default. I just used the same 
config that is used to enable metrics for SCM RocksDB, to enable metrics for OM 
RocksDB as well. 





[GitHub] [hadoop] anuengineer commented on issue #829: HDDS-1550. MiniOzoneCluster is not shutting down all the threads during shutdown. Contributed by Mukul Kumar Singh.

2019-05-29 Thread GitBox
anuengineer commented on issue #829: HDDS-1550. MiniOzoneCluster is not 
shutting down all the threads during shutdown. Contributed by Mukul Kumar Singh.
URL: https://github.com/apache/hadoop/pull/829#issuecomment-497083934
 
 
   I am also +1, but I really agree with @bharatviswa504: we probably should 
not wait for a day for the MiniOzoneCluster to shut down. +1 after fixing that 
issue.





[GitHub] [hadoop] anuengineer commented on a change in pull request #861: HDDS-1596. Create service endpoint to download configuration from SCM

2019-05-29 Thread GitBox
anuengineer commented on a change in pull request #861: HDDS-1596. Create 
service endpoint to download configuration from SCM
URL: https://github.com/apache/hadoop/pull/861#discussion_r288733426
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/HddsDatanodeService.java
 ##
 @@ -141,7 +142,12 @@ public Void call() throws Exception {
   StringUtils
   .startupShutdownMessage(HddsDatanodeService.class, args, LOG);
 }
-start(createOzoneConfiguration());
+OzoneConfiguration ozoneConfiguration = createOzoneConfiguration();
+if (DiscoveryUtil.loadGlobalConfig(ozoneConfiguration)) {
+  //reload the configuratioin with the dowloaded  new configs.
 
 Review comment:
   nit: typo "configuratioin"





[GitHub] [hadoop] anuengineer commented on a change in pull request #861: HDDS-1596. Create service endpoint to download configuration from SCM

2019-05-29 Thread GitBox
anuengineer commented on a change in pull request #861: HDDS-1596. Create 
service endpoint to download configuration from SCM
URL: https://github.com/apache/hadoop/pull/861#discussion_r288734823
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/discovery/ConfigurationXmlEntry.java
 ##
 @@ -0,0 +1,56 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.discovery;
+
+import javax.xml.bind.annotation.XmlElement;
+
+/**
+ * JAXB representation of one property of a hadoop configuration XML.
+ */
+public class ConfigurationXmlEntry {
+
+  @XmlElement
+  private String name;
+
+  @XmlElement
+  private String value;
 
 Review comment:
   Don't we also have Tags and Comments ?





[GitHub] [hadoop] anuengineer commented on a change in pull request #861: HDDS-1596. Create service endpoint to download configuration from SCM

2019-05-29 Thread GitBox
anuengineer commented on a change in pull request #861: HDDS-1596. Create 
service endpoint to download configuration from SCM
URL: https://github.com/apache/hadoop/pull/861#discussion_r288732873
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/discovery/DiscoveryUtil.java
 ##
 @@ -0,0 +1,90 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.discovery;
+
+import java.io.File;
+import java.io.FileOutputStream;
+import java.net.URL;
+import java.nio.channels.Channels;
+import java.nio.channels.ReadableByteChannel;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Utility to download ozone configuration from SCM.
+ */
+public final class DiscoveryUtil {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(DiscoveryUtil.class);
+
+  public static final String OZONE_GLOBAL_XML = "ozone-global.xml";
+
+  private DiscoveryUtil() {
+  }
+
+  /**
+   * Download ozone-global.conf from SCM to the local HADOOP_CONF_DIR.
+   */
+  public static boolean loadGlobalConfig(OzoneConfiguration conf) {
+String hadoopConfDir = System.getenv("HADOOP_CONF_DIR");
+if (hadoopConfDir == null || hadoopConfDir.isEmpty()) {
+  LOG.warn(
+  "HADOOP_CONF_DIR is not set, can't download ozone-global.xml from "
+  + "SCM.");
+  return false;
+}
+if (conf.get("ozone.scm.names") == null) {
+  LOG.warn("ozone.scm.names is not set. Can't download config from scm.");
+  return false;
+}
+for (int i = 0; i < 60; i++) {
 
 Review comment:
   I am really conflicted about this line. On one side, I think we should write 
the download code as a small function and then allow the user to pass the max 
wait time. On the other hand, I like the simplicity for the user: it is get the 
config, and wait a reasonable time to get it. I am not really sure which is the 
best approach. Just leaving the comment here so you can consider the thought and 
do what you like; either one works.
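A minimal sketch of that refactoring: the download attempt extracted into a supplier, with the caller choosing the maximum wait instead of a hard-coded 60-iteration loop. Names here are hypothetical, not the actual HDDS API:

```java
import java.util.Optional;
import java.util.function.Supplier;

// Hedged sketch: retry an attempt until it yields a value or a
// caller-supplied deadline passes. The caller decides how long to wait.
public class RetryWithDeadline {

    static <T> Optional<T> retry(Supplier<Optional<T>> attempt,
                                 long maxWaitMillis,
                                 long intervalMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + maxWaitMillis;
        while (true) {
            Optional<T> result = attempt.get();
            if (result.isPresent() || System.currentTimeMillis() >= deadline) {
                return result;
            }
            Thread.sleep(intervalMillis);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        final int[] calls = {0};
        // Simulated download that succeeds on the third attempt.
        Optional<String> config = retry(
            () -> ++calls[0] >= 3 ? Optional.of("ozone-global.xml")
                                  : Optional.<String>empty(),
            1000, 10);
        System.out.println(config.get() + " after " + calls[0] + " attempts");
        // prints ozone-global.xml after 3 attempts
    }
}
```

The fixed-iteration loop in the patch then becomes one call site with an explicit timeout, which keeps the "just works" behavior while making the wait tunable.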





[GitHub] [hadoop] ajayydv commented on a change in pull request #847: HDDS-1539. Implement addAcl, removeAcl, setAcl, getAcl for Volume. Contributed Ajay Kumar.

2019-05-29 Thread GitBox
ajayydv commented on a change in pull request #847: HDDS-1539. Implement 
addAcl,removeAcl,setAcl,getAcl for Volume. Contributed Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/847#discussion_r288734357
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/VolumeManager.java
 ##
 @@ -141,4 +143,43 @@ boolean checkVolumeAccess(String volume, OzoneAclInfo 
userAcl)
*/
   List listVolumes(String userName, String prefix,
   String startKey, int maxKeys) throws IOException;
+
+  /**
+   * Add acl for Ozone object. Return true if acl is added successfully else
+   * false.
+   * @param obj Ozone object for which acl should be added.
+   * @param acl ozone acl to be added.
+   *
+   * @throws IOException if there is error.
+   * */
+  boolean addAcl(OzoneObj obj, OzoneAcl acl) throws IOException;
 
 Review comment:
   Since OzoneObj is a wrapper, the remaining information might be used later to 
differentiate between stores. It also keeps the API consistent for all entities.





[GitHub] [hadoop] avijayanhwx commented on a change in pull request #868: HDDS-1568 : Add RocksDB metrics to OM.

2019-05-29 Thread GitBox
avijayanhwx commented on a change in pull request #868: HDDS-1568 : Add RocksDB 
metrics to OM.
URL: https://github.com/apache/hadoop/pull/868#discussion_r288731790
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/RocksDBStoreMBean.java
 ##
 @@ -41,13 +49,21 @@
 /**
  * Adapter JMX bean to publish all the Rocksdb metrics.
  */
-public class RocksDBStoreMBean implements DynamicMBean {
+public class RocksDBStoreMBean implements DynamicMBean, MetricsSource {
 
 
 Review comment:
   @anuengineer I verified by checking the /prom servlet endpoint. I also made 
changes in the Prometheus sink metric name sanitization code so that the 
RocksDB metric names don't look odd.
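As a rough illustration of the sanitization concern: Prometheus metric names must match `[a-zA-Z_:][a-zA-Z0-9_:]*`, so a sink typically maps everything else to underscores. The snippet below is a generic sketch of that technique, not the actual PrometheusMetricsSink code:

```java
// Generic sketch of Prometheus-style metric name sanitization: lowercase
// the name and replace every character outside [a-z0-9_] with '_'. This
// is an assumption about the general technique, not the Ozone sink code.
public class MetricNameSanitizer {

    static String sanitize(String name) {
        return name.toLowerCase().replaceAll("[^a-z0-9_]", "_");
    }

    public static void main(String[] args) {
        // RocksDB ticker names contain dots and dashes
        System.out.println(sanitize("rocksdb.block-cache-hit"));
        // prints rocksdb_block_cache_hit
    }
}
```

Without such a pass, raw RocksDB ticker names would be rejected or mangled by Prometheus scrapers.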





[GitHub] [hadoop] hadoop-yetus commented on issue #806: HDDS-1224. Restructure code to validate the response from server in the Read path

2019-05-29 Thread GitBox
hadoop-yetus commented on issue #806: HDDS-1224. Restructure code to validate 
the response from server in the Read path
URL: https://github.com/apache/hadoop/pull/806#issuecomment-497075978
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 32 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 70 | Maven dependency ordering for branch |
   | +1 | mvninstall | 535 | trunk passed |
   | +1 | compile | 273 | trunk passed |
   | +1 | checkstyle | 87 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 886 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 164 | trunk passed |
   | 0 | spotbugs | 290 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 483 | trunk passed |
   | -0 | patch | 335 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for patch |
   | +1 | mvninstall | 470 | the patch passed |
   | +1 | compile | 296 | the patch passed |
   | +1 | javac | 296 | the patch passed |
   | -0 | checkstyle | 40 | hadoop-hdds: The patch generated 21 new + 0 
unchanged - 0 fixed = 21 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 647 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 75 | hadoop-hdds generated 5 new + 14 unchanged - 0 fixed = 
19 total (was 14) |
   | +1 | findbugs | 481 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 237 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1582 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 42 | The patch does not generate ASF License warnings. |
   | | | 6690 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.ozShell.TestOzoneShell |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.TestMiniOzoneCluster |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-806/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/806 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux d554bfa28f11 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / abf76ac |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-806/3/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-806/3/artifact/out/diff-javadoc-javadoc-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-806/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-806/3/testReport/ |
   | Max. process+thread count | 5355 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/client hadoop-hdds/common 
hadoop-ozone/integration-test U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-806/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ajayydv commented on a change in pull request #847: HDDS-1539. Implement addAcl, removeAcl, setAcl, getAcl for Volume. Contributed Ajay Kumar.

2019-05-29 Thread GitBox
ajayydv commented on a change in pull request #847: HDDS-1539. Implement 
addAcl,removeAcl,setAcl,getAcl for Volume. Contributed Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/847#discussion_r288730176
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OzoneAcl.java
 ##
 @@ -75,9 +77,20 @@ public OzoneAcl(ACLIdentityType type, String name, ACLType 
acl) {
* @param name - Name of user
* @param acls - Rights
*/
-  public OzoneAcl(ACLIdentityType type, String name, List acls) {
+  public OzoneAcl(ACLIdentityType type, String name, BitSet acls) {
+Objects.requireNonNull(type);
+Objects.requireNonNull(acls);
 
 Review comment:
   name might be null when type is WORLD or ANONYMOUS.
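   The point above can be sketched as a small validation helper: only identity types that name a concrete principal require a non-null name, while WORLD and ANONYMOUS are name-less by definition. This is a minimal illustration, not the actual Ozone code; the enum and method names here are hypothetical stand-ins.

   ```java
   import java.util.Objects;

   public class AclNameCheckDemo {
       // Hypothetical stand-in for Ozone's ACLIdentityType, for illustration only.
       enum AclIdentityType { USER, GROUP, WORLD, ANONYMOUS }

       // WORLD and ANONYMOUS carry no principal name, so a null name is
       // tolerated (normalized to "") for them; all other types require one.
       static String validateName(AclIdentityType type, String name) {
           Objects.requireNonNull(type, "ACL identity type is required");
           if (type == AclIdentityType.WORLD || type == AclIdentityType.ANONYMOUS) {
               return name == null ? "" : name;
           }
           return Objects.requireNonNull(name, "name is required for " + type);
       }

       public static void main(String[] args) {
           // Null name is accepted for WORLD, rejected for USER.
           System.out.println(validateName(AclIdentityType.WORLD, null).isEmpty());
           try {
               validateName(AclIdentityType.USER, null);
           } catch (NullPointerException expected) {
               System.out.println("rejected: " + expected.getMessage());
           }
       }
   }
   ```

   With this shape, an unconditional `Objects.requireNonNull(name)` in the constructor would wrongly reject legitimate WORLD/ANONYMOUS ACLs, which is exactly the concern raised in the comment.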





[GitHub] [hadoop] anuengineer commented on a change in pull request #868: HDDS-1568 : Add RocksDB metrics to OM.

2019-05-29 Thread GitBox
anuengineer commented on a change in pull request #868: HDDS-1568 : Add RocksDB 
metrics to OM.
URL: https://github.com/apache/hadoop/pull/868#discussion_r288727594
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/RocksDBStoreMBean.java
 ##
 @@ -41,13 +49,21 @@
 /**
  * Adapter JMX bean to publish all the Rocksdb metrics.
  */
-public class RocksDBStoreMBean implements DynamicMBean {
+public class RocksDBStoreMBean implements DynamicMBean, MetricsSource {
 
 
 Review comment:
   @elek  Will this automatically show up in Prometheus, or do we need to do 
any extra plumbing? See the comment in the JIRA where @jnp talks about 
Container metrics on Datanodes.





[GitHub] [hadoop] ajayydv commented on a change in pull request #847: HDDS-1539. Implement addAcl, removeAcl, setAcl, getAcl for Volume. Contributed Ajay Kumar.

2019-05-29 Thread GitBox
ajayydv commented on a change in pull request #847: HDDS-1539. Implement 
addAcl,removeAcl,setAcl,getAcl for Volume. Contributed Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/847#discussion_r288729707
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/protocolPB/OzoneManagerProtocolClientSideTranslatorPB.java
 ##
 @@ -528,12 +538,11 @@ public void deleteVolume(String volume) throws 
IOException {
 
 ListVolumeResponse resp =
 handleError(submitRequest(omRequest)).getListVolumeResponse();
-
-
-
-return resp.getVolumeInfoList().stream()
-.map(item -> OmVolumeArgs.getFromProtobuf(item))
-.collect(Collectors.toList());
+List list = new ArrayList<>(resp.getVolumeInfoList().size());
 
 Review comment:
   Now OmVolumeArgs#getFromProtobuf throws a checked exception, which is what 
warranted this change.
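   The reason a plain loop replaces the stream pipeline is a general Java constraint: a lambda passed to `Stream.map` cannot throw a checked exception, so once the converter declares one, the stream version no longer compiles and a for-loop propagates the exception cleanly. A minimal sketch under assumed names (`fromProto` stands in for the real `OmVolumeArgs.getFromProtobuf`):

   ```java
   import java.io.IOException;
   import java.util.ArrayList;
   import java.util.Arrays;
   import java.util.List;

   public class CheckedExceptionLoopDemo {
       // Illustrative converter that, like getFromProtobuf after this change,
       // declares a checked exception.
       static String fromProto(String proto) throws IOException {
           if (proto.isEmpty()) {
               throw new IOException("malformed volume info");
           }
           return proto.toUpperCase();
       }

       // protos.stream().map(CheckedExceptionLoopDemo::fromProto) would not
       // compile: Function.apply cannot throw IOException. The loop can.
       static List<String> convertAll(List<String> protos) throws IOException {
           List<String> out = new ArrayList<>(protos.size());
           for (String p : protos) {
               out.add(fromProto(p));
           }
           return out;
       }

       public static void main(String[] args) throws IOException {
           System.out.println(convertAll(Arrays.asList("vol1", "vol2"))); // prints [VOL1, VOL2]
       }
   }
   ```

   The alternative, wrapping the checked exception in an unchecked one inside the lambda and unwrapping it outside, is usually harder to read than the loop.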





[GitHub] [hadoop] anuengineer commented on a change in pull request #868: HDDS-1568 : Add RocksDB metrics to OM.

2019-05-29 Thread GitBox
anuengineer commented on a change in pull request #868: HDDS-1568 : Add RocksDB 
metrics to OM.
URL: https://github.com/apache/hadoop/pull/868#discussion_r288728813
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/DBStoreBuilder.java
 ##
 @@ -187,7 +196,13 @@ private DBOptions getDbProfile() {
 
 if (option == null) {
   LOG.info("Using default options. {}", dbProfile.toString());
-  return dbProfile.getDBOptions();
+  option = dbProfile.getDBOptions();
+}
+
+if (!rocksDbStat.equals(OZONE_METADATA_STORE_ROCKSDB_STATISTICS_OFF)) {
 
 Review comment:
   There is some history here. During our first release we found that RocksDB 
is also shipped by YARN. That version of RocksDB is very old, hence this call 
would fail in mysterious ways. @arp7  went and fixed that issue and made sure 
that we don't enable this by default. I am fine with enabling this, if we don't 
run into that old issue again. @elek, @arp7 any comments ?
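   The gate being discussed can be reduced to a one-line predicate: statistics are wired into the RocksDB options only when the configured level is not the OFF sentinel. A small sketch of that logic (constant name is an illustrative stand-in for `OZONE_METADATA_STORE_ROCKSDB_STATISTICS_OFF`; the real code then calls into the RocksDB statistics API, which is omitted here to keep the example dependency-free):

   ```java
   public class RocksDbStatGateDemo {
       // Illustrative stand-in for the "statistics disabled" sentinel value.
       static final String STATISTICS_OFF = "OFF";

       // Constant-first equals avoids a NullPointerException if the config
       // value is null; note a null level then counts as "not OFF" and would
       // enable statistics, so callers should supply a default.
       static boolean statisticsEnabled(String configuredLevel) {
           return !STATISTICS_OFF.equals(configuredLevel);
       }

       public static void main(String[] args) {
           System.out.println(statisticsEnabled("ALL")); // prints true
           System.out.println(statisticsEnabled("OFF")); // prints false
       }
   }
   ```

   Keeping the default at OFF, as discussed above, means an old RocksDB on the classpath is never asked for statistics unless an operator explicitly opts in.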




