[jira] [Commented] (HDDS-4283) Remove unsupported upgrade command in ozone cli

2020-09-28 Thread Attila Doroszlai (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203077#comment-17203077
 ] 

Attila Doroszlai commented on HDDS-4283:


I think this was already removed in HDDS-3992.

> Remove unsupported upgrade command in ozone cli
> ---
>
> Key: HDDS-4283
> URL: https://issues.apache.org/jira/browse/HDDS-4283
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
>
> In HDDS-1383, we introduced a new upgrade command to support in-place
> upgrade from HDFS to Ozone.
> {noformat}
> upgrade   HDFS to Ozone in-place upgrade tool
> 
> Usage: ozone upgrade [-hV] [--verbose] [-conf=]
>  [-D=]... [COMMAND]
> Convert raw HDFS data to Ozone data without data movement.
>   --verbose   More verbose output. Show the stack trace of the errors.
>   -conf=
>   -D, --set=
>   -h, --help  Show this help message and exit.
>   -V, --version   Print version information and exit.
> Commands:
>   plan Plan existing HDFS block distribution and give estimation.
>   balance  Move the HDFS blocks for a better distribution usage.
>   execute  Start/restart upgrade from HDFS to Ozone cluster.
> [hdfs@lyq ~]$ ~/ozone/bin/ozone upgrade plan
> [In-Place upgrade : plan] is not yet supported.
> [hdfs@lyq ~]$ ~/ozone/bin/ozone upgrade balance
> [In-Place upgrade : balance] is not yet supported.
> [hdfs@lyq ~]$ ~/ozone/bin/ozone upgrade execute
> [In-Place upgrade : execute] is not yet supported.
> {noformat}
> But this feature has not been implemented yet, and it is a very big feature.
>  I don't think it's good to expose a CLI command that is not supported and
> cannot be quickly implemented in the short term.






[GitHub] [hadoop-ozone] ChenSammi commented on a change in pull request #1412: HDDS-3751. Ozone sh client support bucket quota option.

2020-09-28 Thread GitBox


ChenSammi commented on a change in pull request #1412:
URL: https://github.com/apache/hadoop-ozone/pull/1412#discussion_r495775842



##
File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
##
@@ -496,6 +491,28 @@ private static void verifyBucketName(String bucketName) 
throws OMException {
 }
   }
 
+  private static void verifyCountsQuota(long quota) throws OMException {
+if ((quota < OzoneConsts.QUOTA_RESET)) {

Review comment:
   double ((








[GitHub] [hadoop-ozone] ChenSammi commented on a change in pull request #1412: HDDS-3751. Ozone sh client support bucket quota option.

2020-09-28 Thread GitBox


ChenSammi commented on a change in pull request #1412:
URL: https://github.com/apache/hadoop-ozone/pull/1412#discussion_r495775842



##
File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
##
@@ -496,6 +491,28 @@ private static void verifyBucketName(String bucketName) 
throws OMException {
 }
   }
 
+  private static void verifyCountsQuota(long quota) throws OMException {
+if ((quota < OzoneConsts.QUOTA_RESET)) {

Review comment:
   single ( is enough.








[GitHub] [hadoop-ozone] ChenSammi commented on a change in pull request #1412: HDDS-3751. Ozone sh client support bucket quota option.

2020-09-28 Thread GitBox


ChenSammi commented on a change in pull request #1412:
URL: https://github.com/apache/hadoop-ozone/pull/1412#discussion_r495775842



##
File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
##
@@ -496,6 +491,28 @@ private static void verifyBucketName(String bucketName) 
throws OMException {
 }
   }
 
+  private static void verifyCountsQuota(long quota) throws OMException {
+if ((quota < OzoneConsts.QUOTA_RESET)) {

Review comment:
   single ( is enough.

##
File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
##
@@ -496,6 +491,28 @@ private static void verifyBucketName(String bucketName) 
throws OMException {
 }
   }
 
+  private static void verifyCountsQuota(long quota) throws OMException {
+if ((quota < OzoneConsts.QUOTA_RESET)) {
+  throw new IllegalArgumentException("Invalid values for quota : " +
+  "counts quota is :" + quota + ".");
+}
+  }
+
+  private static void verifySpaceQuota(long quota) throws OMException {
+if ((quota < OzoneConsts.QUOTA_RESET)) {

Review comment:
   same as above
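
   For illustration, the simplification requested above would look like this
(a sketch based on the quoted diff; the exception message in verifySpaceQuota
is assumed by analogy with verifyCountsQuota):

{code:java}
private static void verifyCountsQuota(long quota) throws OMException {
  // a single pair of parentheses is enough
  if (quota < OzoneConsts.QUOTA_RESET) {
    throw new IllegalArgumentException("Invalid values for quota : " +
        "counts quota is :" + quota + ".");
  }
}

private static void verifySpaceQuota(long quota) throws OMException {
  // same simplification as above; the message text is assumed
  if (quota < OzoneConsts.QUOTA_RESET) {
    throw new IllegalArgumentException("Invalid values for quota : " +
        "space quota is :" + quota + ".");
  }
}
{code}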








[GitHub] [hadoop-ozone] ChenSammi commented on a change in pull request #1412: HDDS-3751. Ozone sh client support bucket quota option.

2020-09-28 Thread GitBox


ChenSammi commented on a change in pull request #1412:
URL: https://github.com/apache/hadoop-ozone/pull/1412#discussion_r495777131



##
File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
##
@@ -334,15 +329,11 @@ public boolean setVolumeOwner(String volumeName, String 
owner)
   }
 
   @Override
-  public void setVolumeQuota(String volumeName, long quotaInCounts,
-  long quotaInBytes) throws IOException {
+  public void setVolumeQuota(String volumeName, long quotaInBytes,
+  long quotaInCounts) throws IOException {

Review comment:
   Can we align the quotaInBytes and quotaInCounts order in setVolumeQuota
and setBucketQuota? In other words, no need for the code change here.
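
   To make the request concrete, the aligned signatures would keep one
consistent parameter order in both methods (a sketch; setBucketQuota's exact
parameter list is assumed by analogy):

{code:java}
// Keep one consistent order everywhere instead of swapping it per method.
void setVolumeQuota(String volumeName, long quotaInCounts, long quotaInBytes)
    throws IOException;
void setBucketQuota(String bucketName, long quotaInCounts, long quotaInBytes)
    throws IOException;
{code}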








[GitHub] [hadoop-ozone] ChenSammi commented on a change in pull request #1412: HDDS-3751. Ozone sh client support bucket quota option.

2020-09-28 Thread GitBox


ChenSammi commented on a change in pull request #1412:
URL: https://github.com/apache/hadoop-ozone/pull/1412#discussion_r495797897



##
File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/bucket/SetQuotaHandler.java
##
@@ -0,0 +1,72 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+package org.apache.hadoop.ozone.shell.bucket;
+
+import org.apache.hadoop.hdds.client.OzoneQuota;
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.client.OzoneBucket;
+import org.apache.hadoop.ozone.client.OzoneClient;
+import org.apache.hadoop.ozone.shell.OzoneAddress;
+import org.apache.hadoop.ozone.shell.SetSpaceQuotaOptions;
+import picocli.CommandLine;
+import picocli.CommandLine.Command;
+import picocli.CommandLine.Option;
+
+import java.io.IOException;
+
+/**
+ * set quota of the bucket.
+ */
+@Command(name = "setquota",
+description = "Set quota of the buckets")
+public class SetQuotaHandler extends BucketHandler {
+
+  @CommandLine.Mixin
+  private SetSpaceQuotaOptions quotaOptions;
+
+  @Option(names = {"--key-quota"},
+  description = "Key counts of the newly created bucket (eg. 5)")
+  private long quotaInCounts = OzoneConsts.QUOTA_RESET;
+
+  /**
+   * Executes create bucket.

Review comment:
   leftover statement








[GitHub] [hadoop-ozone] elek commented on pull request #1419: HDDS-3755. [DESIGN] Storage-class for Ozone

2020-09-28 Thread GitBox


elek commented on pull request #1419:
URL: https://github.com/apache/hadoop-ozone/pull/1419#issuecomment-699905781


   >> Isn't this design doc a few steps away from coding? A ton of detail is 
missing around how SCM will manage multiple classes of pipelines and how 
replication manager will need to be modified.
   
   Would you be so kind as to add a few questions covering which parts are
missing for you? It would be a big help.
   
   We have a working POC which passes all the tests, but it is possible that
some details are available only in the code. If you have any questions, I
kindly ask you to put them here and I will answer them.
   
   >> All the replication logic (PipelineManager/ReplicationManager) will work 
exactly as before. Storage-class will be resolved to the required replication 
config. Pipelines will have the same type as before (eg. Ratis/THREE)
   
   > Really? It seems impossible that we can introduce new storage classes 
without requiring changes to the replication logic.
   
   This paragraph is talking about the initial implementation, where we would
not introduce new storage-classes; they are hard-coded, just as Ratis/THREE and
Ratis/ONE are hard-coded today.
   
   Long-term, it may be possible to introduce a new storage-class without
introducing new replication logic (for example Closed/TWO), or to introduce
totally new replication logic (like EC).
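
   As a hedged illustration of the resolution step described above (all names
here are hypothetical, not the POC's actual API):

{code:java}
// Hypothetical mapping: a storage-class resolves to an existing replication
// setting, so the replication logic itself stays unchanged.
enum StorageClass { STANDARD, REDUCED }

static ReplicationConfig resolve(StorageClass storageClass) {
  switch (storageClass) {
  case STANDARD:
    return new ReplicationConfig(ReplicationType.RATIS, ReplicationFactor.THREE);
  case REDUCED:
    return new ReplicationConfig(ReplicationType.RATIS, ReplicationFactor.ONE);
  default:
    throw new IllegalArgumentException("Unknown storage-class");
  }
}
{code}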






[jira] [Commented] (HDDS-4283) Remove unsupported upgrade command in ozone cli

2020-09-28 Thread Yiqun Lin (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203145#comment-17203145
 ] 

Yiqun Lin commented on HDDS-4283:
-

Thanks [~adoroszlai] for the reference, closed this JIRA.

> Remove unsupported upgrade command in ozone cli
> ---
>
> Key: HDDS-4283
> URL: https://issues.apache.org/jira/browse/HDDS-4283
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
>
> In HDDS-1383, we introduced a new upgrade command to support in-place
> upgrade from HDFS to Ozone.
> {noformat}
> upgrade   HDFS to Ozone in-place upgrade tool
> 
> Usage: ozone upgrade [-hV] [--verbose] [-conf=]
>  [-D=]... [COMMAND]
> Convert raw HDFS data to Ozone data without data movement.
>   --verbose   More verbose output. Show the stack trace of the errors.
>   -conf=
>   -D, --set=
>   -h, --help  Show this help message and exit.
>   -V, --version   Print version information and exit.
> Commands:
>   plan Plan existing HDFS block distribution and give estimation.
>   balance  Move the HDFS blocks for a better distribution usage.
>   execute  Start/restart upgrade from HDFS to Ozone cluster.
> [hdfs@lyq ~]$ ~/ozone/bin/ozone upgrade plan
> [In-Place upgrade : plan] is not yet supported.
> [hdfs@lyq ~]$ ~/ozone/bin/ozone upgrade balance
> [In-Place upgrade : balance] is not yet supported.
> [hdfs@lyq ~]$ ~/ozone/bin/ozone upgrade execute
> [In-Place upgrade : execute] is not yet supported.
> {noformat}
> But this feature has not been implemented yet, and it is a very big feature.
>  I don't think it's good to expose a CLI command that is not supported and
> cannot be quickly implemented in the short term.






[jira] [Resolved] (HDDS-4283) Remove unsupported upgrade command in ozone cli

2020-09-28 Thread Yiqun Lin (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin resolved HDDS-4283.
-
Resolution: Duplicate

> Remove unsupported upgrade command in ozone cli
> ---
>
> Key: HDDS-4283
> URL: https://issues.apache.org/jira/browse/HDDS-4283
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
>
> In HDDS-1383, we introduced a new upgrade command to support in-place
> upgrade from HDFS to Ozone.
> {noformat}
> upgrade   HDFS to Ozone in-place upgrade tool
> 
> Usage: ozone upgrade [-hV] [--verbose] [-conf=]
>  [-D=]... [COMMAND]
> Convert raw HDFS data to Ozone data without data movement.
>   --verbose   More verbose output. Show the stack trace of the errors.
>   -conf=
>   -D, --set=
>   -h, --help  Show this help message and exit.
>   -V, --version   Print version information and exit.
> Commands:
>   plan Plan existing HDFS block distribution and give estimation.
>   balance  Move the HDFS blocks for a better distribution usage.
>   execute  Start/restart upgrade from HDFS to Ozone cluster.
> [hdfs@lyq ~]$ ~/ozone/bin/ozone upgrade plan
> [In-Place upgrade : plan] is not yet supported.
> [hdfs@lyq ~]$ ~/ozone/bin/ozone upgrade balance
> [In-Place upgrade : balance] is not yet supported.
> [hdfs@lyq ~]$ ~/ozone/bin/ozone upgrade execute
> [In-Place upgrade : execute] is not yet supported.
> {noformat}
> But this feature has not been implemented yet, and it is a very big feature.
>  I don't think it's good to expose a CLI command that is not supported and
> cannot be quickly implemented in the short term.






[jira] [Commented] (HDDS-3211) Make SCM HA configurable

2020-09-28 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203212#comment-17203212
 ] 

Nicholas Jiang commented on HDDS-3211:
--

[~timmylicheng], do you mean adding a switch configuration in
OzoneConfiguration to control the start and join of StorageContainerManager?
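
A minimal sketch of such a switch (the key name and default here are
assumptions for illustration, not a settled decision):

{code:java}
// Hypothetical on/off switch, read once at SCM startup.
boolean scmHaEnabled = conf.getBoolean("ozone.scm.ha.enable", false);
if (scmHaEnabled) {
  // start the Ratis server and join the SCM HA group
} else {
  // run a single SCM, as before
}
{code}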

> Make SCM HA configurable
> 
>
> Key: HDDS-3211
> URL: https://issues.apache.org/jira/browse/HDDS-3211
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Li Cheng
>Priority: Major
>
> Need a switch in all path to turn on/off SCM HA.






[jira] [Commented] (HDDS-3197) BackgroundPipelineCreator can only serve leader

2020-09-28 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203224#comment-17203224
 ] 

Nicholas Jiang commented on HDDS-3197:
--

[~timmylicheng], does this issue mean that we should refactor
BackgroundPipelineCreator to BackgroundPipelineHandler and add a leader check
when starting BackgroundPipelineCreator in onMessage of SCMPipelineManager?

> BackgroundPipelineCreator can only serve leader
> ---
>
> Key: HDDS-3197
> URL: https://issues.apache.org/jira/browse/HDDS-3197
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Li Cheng
>Priority: Major
>
> # Refactor to BackgroundPipelineHandler
>  # Only accept leader to trigger this.
>  # scrubber and creator have to only serve the leader






[GitHub] [hadoop-ozone] sodonnel commented on pull request #1338: HDDS-4023. Delete closed container after all blocks have been deleted.

2020-09-28 Thread GitBox


sodonnel commented on pull request #1338:
URL: https://github.com/apache/hadoop-ozone/pull/1338#issuecomment-700016623


   This change looks almost good now. I wonder about two final things:
   
   1. In `updateContainerStats(...)` do you think we should return if the
container is DELETING or DELETED, without making any updates? If this is a stale
replica, then it may not be empty and hence could update the container stats
incorrectly. If the container is already in DELETING or DELETED state, then we
can ignore any changes to it, as we know the stale replica will get removed
anyway.
   
   2. I am wondering if we could receive a stale replica when the container is
DELETING. Then a replica would get added in `updateContainerReplica(...)`. Then
later the container will go to DELETED and the stale replica will get reported
again - at that point we will send a delete command, but the replica will never
get removed from memory now, I think. Would it make sense to send the delete
command for any replicas received when the container is DELETING or DELETED?
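
   A sketch of the guard suggested in point 1 (state and variable names follow
the discussion; the actual patch may differ):

{code:java}
// Inside updateContainerStats(...): ignore stats reported by stale replicas
// of containers that are already being deleted.
if (container.getState() == LifeCycleState.DELETING
    || container.getState() == LifeCycleState.DELETED) {
  return; // the stale replica will be removed anyway
}
{code}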






[jira] [Created] (HDDS-4285) Read is slow due to the frequent usage of UGI.getCurrentUserCall()

2020-09-28 Thread Marton Elek (Jira)
Marton Elek created HDDS-4285:
-

 Summary: Read is slow due to the frequent usage of 
UGI.getCurrentUserCall()
 Key: HDDS-4285
 URL: https://issues.apache.org/jira/browse/HDDS-4285
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Marton Elek
Assignee: Marton Elek
 Attachments: image-2020-09-28-16-19-17-581.png, 
profile-20200928-161631-180518.svg

Ozone read operations turned out to be slow, mainly because we do a new
UGI.getCurrentUser call for the block token on each request.

We need to cache the block token / UGI.getCurrentUserCall() to make it faster.

 !image-2020-09-28-16-19-17-581.png! 






[jira] [Updated] (HDDS-4285) Read is slow due to the frequent usage of UGI.getCurrentUserCall()

2020-09-28 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek updated HDDS-4285:
--
Attachment: profile-20200928-161631-180518.svg

> Read is slow due to the frequent usage of UGI.getCurrentUserCall()
> --
>
> Key: HDDS-4285
> URL: https://issues.apache.org/jira/browse/HDDS-4285
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
> Attachments: image-2020-09-28-16-19-17-581.png, 
> profile-20200928-161631-180518.svg
>
>
> Ozone read operations turned out to be slow, mainly because we do a new
> UGI.getCurrentUser call for the block token on each request.
> We need to cache the block token / UGI.getCurrentUserCall() to make it faster.
>  !image-2020-09-28-16-19-17-581.png! 






[jira] [Updated] (HDDS-4285) Read is slow due to the frequent usage of UGI.getCurrentUserCall()

2020-09-28 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek updated HDDS-4285:
--
Description: 
Ozone read operations turned out to be slow, mainly because we do a new
UGI.getCurrentUser call for the block token on each request.

We need to cache the block token / UGI.getCurrentUserCall() to make it faster.

 !image-2020-09-28-16-19-17-581.png! 

To reproduce:

Checkout: https://github.com/elek/hadoop-ozone/tree/mocked-read

{code}
cd hadoop-ozone/client

export MAVEN_OPTS=-agentpath:/home/elek/prog/async-profiler/build/libasyncProfiler.so=start,file=/tmp/profile-%t-%p.svg

mvn compile exec:java \
  -Dexec.mainClass=org.apache.hadoop.ozone.client.io.TestKeyOutputStreamUnit \
  -Dexec.classpathScope=test
{code}

  was:
Ozone read operations turned out to be slow, mainly because we do a new
UGI.getCurrentUser call for the block token on each request.

We need to cache the block token / UGI.getCurrentUserCall() to make it faster.

 !image-2020-09-28-16-19-17-581.png! 


> Read is slow due to the frequent usage of UGI.getCurrentUserCall()
> --
>
> Key: HDDS-4285
> URL: https://issues.apache.org/jira/browse/HDDS-4285
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
> Attachments: image-2020-09-28-16-19-17-581.png, 
> profile-20200928-161631-180518.svg
>
>
> Ozone read operations turned out to be slow, mainly because we do a new
> UGI.getCurrentUser call for the block token on each request.
> We need to cache the block token / UGI.getCurrentUserCall() to make it faster.
>  !image-2020-09-28-16-19-17-581.png! 
> To reproduce:
> Checkout: https://github.com/elek/hadoop-ozone/tree/mocked-read
> {code}
> cd hadoop-ozone/client
> export MAVEN_OPTS=-agentpath:/home/elek/prog/async-profiler/build/libasyncProfiler.so=start,file=/tmp/profile-%t-%p.svg
> mvn compile exec:java \
>   -Dexec.mainClass=org.apache.hadoop.ozone.client.io.TestKeyOutputStreamUnit \
>   -Dexec.classpathScope=test
> {code}






[jira] [Commented] (HDDS-4285) Read is slow due to the frequent usage of UGI.getCurrentUserCall()

2020-09-28 Thread Yiqun Lin (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203321#comment-17203321
 ] 

Yiqun Lin commented on HDDS-4285:
-

Looking into this, I am thinking of two approaches:

1. Get the UGI instance in ChunkInputStream (or other invoking places), then set
the UGI in XceiverClientSpi, and extract the UGI and get the token string in the
ContainerProtocolCalls method.

2. Make UGI a thread-local field in ContainerProtocolCalls, and then set the UGI
in ChunkInputStream or other similar places.

#1 is the more generic approach; the UGI stored in XceiverClientSpi can be
reused in other places.
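
A minimal sketch of approach 2, assuming a lazily initialized thread-local
(field placement and names are illustrative only):

{code:java}
// Hypothetical cache in ContainerProtocolCalls: resolve the UGI once per
// thread instead of calling UGI.getCurrentUser() for every request.
private static final ThreadLocal<UserGroupInformation> CACHED_UGI =
    ThreadLocal.withInitial(() -> {
      try {
        return UserGroupInformation.getCurrentUser();
      } catch (IOException e) {
        throw new UncheckedIOException(e);
      }
    });
{code}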
  

> Read is slow due to the frequent usage of UGI.getCurrentUserCall()
> --
>
> Key: HDDS-4285
> URL: https://issues.apache.org/jira/browse/HDDS-4285
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
> Attachments: image-2020-09-28-16-19-17-581.png, 
> profile-20200928-161631-180518.svg
>
>
> Ozone read operations turned out to be slow, mainly because we do a new
> UGI.getCurrentUser call for the block token on each request.
> We need to cache the block token / UGI.getCurrentUserCall() to make it faster.
>  !image-2020-09-28-16-19-17-581.png! 
> To reproduce:
> Checkout: https://github.com/elek/hadoop-ozone/tree/mocked-read
> {code}
> cd hadoop-ozone/client
> export MAVEN_OPTS=-agentpath:/home/elek/prog/async-profiler/build/libasyncProfiler.so=start,file=/tmp/profile-%t-%p.svg
> mvn compile exec:java \
>   -Dexec.mainClass=org.apache.hadoop.ozone.client.io.TestKeyOutputStreamUnit \
>   -Dexec.classpathScope=test
> {code}






[jira] [Comment Edited] (HDDS-4285) Read is slow due to the frequent usage of UGI.getCurrentUserCall()

2020-09-28 Thread Yiqun Lin (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203321#comment-17203321
 ] 

Yiqun Lin edited comment on HDDS-4285 at 9/28/20, 3:56 PM:
---

Looking into this, I am thinking of two approaches:

1. Initialize the UGI instance in ChunkInputStream (or other invoking places),
then set the UGI in XceiverClientSpi, and extract the UGI and get the token
string in the ContainerProtocolCalls method.

2. Make UGI a thread-local field in ContainerProtocolCalls, and then set the UGI
in ChunkInputStream or other similar places.

#1 is the more generic approach; the UGI stored in XceiverClientSpi can be
reused in other places.
  


was (Author: linyiqun):
Looking into this, I am thinking of two approaches:

1. Get the UGI instance in ChunkInputStream (or other invoking places), then set
the UGI in XceiverClientSpi, and extract the UGI and get the token string in the
ContainerProtocolCalls method.

2. Make UGI a thread-local field in ContainerProtocolCalls, and then set the UGI
in ChunkInputStream or other similar places.

#1 is the more generic approach; the UGI stored in XceiverClientSpi can be
reused in other places.
  

> Read is slow due to the frequent usage of UGI.getCurrentUserCall()
> --
>
> Key: HDDS-4285
> URL: https://issues.apache.org/jira/browse/HDDS-4285
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
> Attachments: image-2020-09-28-16-19-17-581.png, 
> profile-20200928-161631-180518.svg
>
>
> Ozone read operations turned out to be slow, mainly because we do a new
> UGI.getCurrentUser call for the block token on each request.
> We need to cache the block token / UGI.getCurrentUserCall() to make it faster.
>  !image-2020-09-28-16-19-17-581.png! 
> To reproduce:
> Checkout: https://github.com/elek/hadoop-ozone/tree/mocked-read
> {code}
> cd hadoop-ozone/client
> export MAVEN_OPTS=-agentpath:/home/elek/prog/async-profiler/build/libasyncProfiler.so=start,file=/tmp/profile-%t-%p.svg
> mvn compile exec:java \
>   -Dexec.mainClass=org.apache.hadoop.ozone.client.io.TestKeyOutputStreamUnit \
>   -Dexec.classpathScope=test
> {code}






[jira] [Comment Edited] (HDDS-4285) Read is slow due to the frequent usage of UGI.getCurrentUserCall()

2020-09-28 Thread Yiqun Lin (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203321#comment-17203321
 ] 

Yiqun Lin edited comment on HDDS-4285 at 9/28/20, 3:58 PM:
---

Looking into this, I am thinking of two approaches:

1. Initialize the UGI instance in ChunkInputStream (or other invoking places),
then set the UGI in XceiverClientSpi, and extract the UGI and get the token
string in the ContainerProtocolCalls method.

2. Make UGI a thread-local field in ContainerProtocolCalls, and then reset
ContainerProtocolCalls#UGI in ChunkInputStream or other places.

#1 is the more generic approach; the UGI stored in XceiverClientSpi can be
reused in other places.
  


was (Author: linyiqun):
Looking into this, I am thinking of two approaches:

1. Initialize the UGI instance in ChunkInputStream (or other invoking places),
then set the UGI in XceiverClientSpi, and extract the UGI and get the token
string in the ContainerProtocolCalls method.

2. Make UGI a thread-local field in ContainerProtocolCalls, and then set the UGI
in ChunkInputStream or other similar places.

#1 is the more generic approach; the UGI stored in XceiverClientSpi can be
reused in other places.
  

> Read is slow due to the frequent usage of UGI.getCurrentUserCall()
> --
>
> Key: HDDS-4285
> URL: https://issues.apache.org/jira/browse/HDDS-4285
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
> Attachments: image-2020-09-28-16-19-17-581.png, 
> profile-20200928-161631-180518.svg
>
>
> Ozone read operations turned out to be slow, mainly because we do a new
> UGI.getCurrentUser call for the block token on each request.
> We need to cache the block token / UGI.getCurrentUserCall() to make it faster.
>  !image-2020-09-28-16-19-17-581.png! 
> To reproduce:
> Checkout: https://github.com/elek/hadoop-ozone/tree/mocked-read
> {code}
> cd hadoop-ozone/client
> export MAVEN_OPTS=-agentpath:/home/elek/prog/async-profiler/build/libasyncProfiler.so=start,file=/tmp/profile-%t-%p.svg
> mvn compile exec:java \
>   -Dexec.mainClass=org.apache.hadoop.ozone.client.io.TestKeyOutputStreamUnit \
>   -Dexec.classpathScope=test
> {code}






[GitHub] [hadoop-ozone] amaliujia commented on pull request #1445: HDDS-4272. Volume namespace: add usedNamespace and update it when create and delete bucket

2020-09-28 Thread GitBox


amaliujia commented on pull request #1445:
URL: https://github.com/apache/hadoop-ozone/pull/1445#issuecomment-700130504


   @cxorm can you take another look please? Thanks!






[jira] [Updated] (HDDS-3698) Ozone Non-Rolling upgrades

2020-09-28 Thread Aravindan Vijayan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-3698:

Status: Patch Available  (was: Open)

> Ozone Non-Rolling upgrades
> --
>
> Key: HDDS-3698
> URL: https://issues.apache.org/jira/browse/HDDS-3698
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
> Attachments: Ozone Non-Rolling Upgrades (Presentation).pdf, Ozone 
> Non-Rolling Upgrades Doc v1.1.pdf, Ozone Non-Rolling Upgrades.pdf
>
>
> Support for Non-rolling upgrades in Ozone.






[jira] [Updated] (HDDS-3698) Ozone Non-Rolling upgrades

2020-09-28 Thread Aravindan Vijayan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-3698:

Status: In Progress  (was: Patch Available)

> Ozone Non-Rolling upgrades
> --
>
> Key: HDDS-3698
> URL: https://issues.apache.org/jira/browse/HDDS-3698
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
> Attachments: Ozone Non-Rolling Upgrades (Presentation).pdf, Ozone 
> Non-Rolling Upgrades Doc v1.1.pdf, Ozone Non-Rolling Upgrades.pdf
>
>
> Support for Non-rolling upgrades in Ozone.






[jira] [Updated] (HDDS-4227) Implement a "prepareForUpgrade" step that applies all committed transactions onto the OM state machine.

2020-09-28 Thread Aravindan Vijayan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-4227:

Status: Patch Available  (was: In Progress)

> Implement a "prepareForUpgrade" step that applies all committed transactions 
> onto the OM state machine.
> ---
>
> Key: HDDS-4227
> URL: https://issues.apache.org/jira/browse/HDDS-4227
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> *Why is this needed?*
> Through HDDS-4143, we have a generic factory to handle multiple versions of 
> apply transaction implementations based on layout version. Hence, this 
> factory can be used to handle versioned requests across layout versions, 
> whenever both the versions need to exist in the code (Let's say for 
> HDDS-2939). 
> However, it has been noticed that the OM Ratis requests are still undergoing
> a lot of minor changes (HDDS-4007, HDDS-3903), and in these cases it will
> become hard to maintain two versions of the code just to support clean
> upgrades.
> Hence, the plan is to build a pre-upgrade utility (client API) that makes
> sure that an OM instance has no "un-applied" transactions in its Raft log.
> Invoking this client API makes sure that the upgrade starts with a clean
> state. Of course, this would be needed only in an HA setup. In a non-HA setup,
> this can either be skipped, or when invoked will be a no-op (non-Ratis) or
> cause no harm (single-node Ratis).
> *How does it work?*
> Before updating the software bits, our goal is to get OMs to get to the  
> latest state with respect to apply transaction. The reason we want this is to 
> make sure that the same version of the code executes the AT step in all the 3 
> OMs. In a high level, the flow will be as follows.
> * Before upgrade, *stop* the OMs.
> * Start OMs with a special flag --prepareUpgrade (This is something like 
> --init,  which is a special state which stops the ephemeral OM instance after 
> doing some work)
> * When OM is started with the --prepareUpgrade flag, it does not start the 
> RPC server, so no new requests can get in.
> * In this state, we give every OM time to apply transactions up to the last
> txn (see the sketch after this list).
> * We know that at least 2 OMs would have gotten the last client request 
> transaction committed into their log. Hence, those 2 OMs are expected to 
> apply transaction to that index faster.
> * At every OM, the Raft log will be purged after this wait period (so that 
> the replay does not happen), and a Ratis snapshot taken at last txn.
> * Even if there is a lagger OM which is unable to get to last applied txn 
> index, its logs will be purged after the wait time expires.
> * Now when OMs are started with newer version, all the OMs will start using 
> the new code.
> * The lagger OM will get the new Ratis snapshot since there are no logs to 
> replay from.
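
A rough sketch of the wait-and-purge step described in the list above (all
names, calls, and error handling here are illustrative assumptions, not the
actual patch):

{code:java}
// Wait until this OM has applied every committed transaction, then purge the
// log and snapshot, so no replay happens across the upgrade.
void prepareForUpgrade() throws Exception {
  long lastCommitted = raftLog.getLastCommittedIndex();   // name assumed
  while (stateMachine.getLastAppliedIndex() < lastCommitted
      && !waitTimeExpired()) {
    Thread.sleep(CHECK_INTERVAL_MS);
  }
  stateMachine.takeSnapshot();  // snapshot at the last applied txn
  raftLog.purge();              // nothing left to replay on restart
}
{code}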






[jira] [Updated] (HDDS-3810) Add the logic to distribute open containers among the pipelines of a datanode

2020-09-28 Thread Nanda kumar (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-3810:
--
Status: Patch Available  (was: Open)

> Add the logic to distribute open containers among the pipelines of a datanode
> -
>
> Key: HDDS-3810
> URL: https://issues.apache.org/jira/browse/HDDS-3810
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
>
> A datanode can participate in multiple pipelines based on the number of Raft
> log disks as well as the disk type. SCM should distribute open containers
> evenly among this set of pipelines.






[GitHub] [hadoop-ozone] nandakumar131 commented on a change in pull request #1274: HDDS-3810. Add the logic to distribute open containers among the pipelines of a datanode.

2020-09-28 Thread GitBox


nandakumar131 commented on a change in pull request #1274:
URL: https://github.com/apache/hadoop-ozone/pull/1274#discussion_r496111822



##
File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DatanodeInfo.java
##
@@ -121,6 +143,19 @@ public int getHealthyVolumeCount() {
 }
   }
 
+  /**
+   * Returns count of healthy raft log volumes reported from datanode.
+   * @return count of healthy raft log volumes
+   */
+  public int getRaftLogVolumeCount() {

Review comment:
   Rename suggestion: `getMetadataVolumeCount()`

##
File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/SCMContainerManager.java
##
@@ -101,12 +103,23 @@ public SCMContainerManager(
 this.numContainerPerVolume = conf
 .getInt(ScmConfigKeys.OZONE_SCM_PIPELINE_OWNER_CONTAINER_COUNT,
 ScmConfigKeys.OZONE_SCM_PIPELINE_OWNER_CONTAINER_COUNT_DEFAULT);
+this.numPipelinesPerRaftLogDisk = conf
+.getInt(ScmConfigKeys.OZONE_SCM_PIPELINE_PER_RAFT_LOG_DISK,
+ScmConfigKeys.OZONE_SCM_PIPELINE_PER_RAFT_LOG_DISK_DEFAULT);
 
 loadExistingContainers();
 
 scmContainerManagerMetrics = SCMContainerManagerMetrics.create();
   }
 
+  private int getOpenContainerCountPerPipeline(Pipeline pipeline) {
+int totalContainerCountPerDn = numContainerPerVolume *
+pipelineManager.getNumHealthyVolumes(pipeline);
+int maxPipelineCountPerDn = pipelineManager.maxPipelineLimit(pipeline);
+return (int) Math.ceil(
+((double) totalContainerCountPerDn / maxPipelineCountPerDn));
+  }
+

Review comment:
   Will this work in the case of heterogeneous datanodes, where one datanode
has 1 Raft log disk with 2 data disks and the other datanode has 5 Raft log
disks with 10 data disks?
   
   According to the current logic, `getOpenContainerCountPerPipeline` will
return 10 if `numContainerPerVolume` and `numPipelinesPerRaftLogDisk` are set
to 2.
   
   
   
   








[jira] [Updated] (HDDS-4182) Onboard HDDS-3869 into Layout version management

2020-09-28 Thread Aravindan Vijayan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-4182:

Description: 
In HDDS-3869 (Use different column families for datanode block and metadata),  
there was a backward compatible change made in the Ozone datanode RocksDB. This 
JIRA tracks the effort to use a "Layout Version" to track this change such that 
it is NOT used before finalizing the cluster.

cc [~erose], [~hanishakoneru]

  was:
In HDDS-3869 (Use different column families for datanode block and metadata),  
there was a backward compatible change made in the Ozone datanode RocksDB. This 
JIRA tracks the effort to use a "Layout Version" to track this change such that 
it is NOT used before finalizing the cluster.

cc [~erose], [~hkoneru]


> Onboard HDDS-3869 into Layout version management
> 
>
> Key: HDDS-4182
> URL: https://issues.apache.org/jira/browse/HDDS-4182
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Aravindan Vijayan
>Priority: Major
>
> In HDDS-3869 (Use different column families for datanode block and metadata), 
>  there was a backward compatible change made in the Ozone datanode RocksDB. 
> This JIRA tracks the effort to use a "Layout Version" to track this change 
> such that it is NOT used before finalizing the cluster.
> cc [~erose], [~hanishakoneru]






[GitHub] [hadoop-ozone] nandakumar131 commented on pull request #1414: HDDS-4231. Background Service blocks on task results.

2020-09-28 Thread GitBox


nandakumar131 commented on pull request #1414:
URL: https://github.com/apache/hadoop-ozone/pull/1414#issuecomment-700200428


   Overall the patch looks good to me.
   
   A minor suggestion, feel free to ignore it.
   Instead of having a generic reference in `BackgroundTaskQueue` (which
actually represents the type of result returned by the task in the queue, not
the item in the queue), `BackgroundTask` itself can be made generic. This will
be more explicit.
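
   A hedged sketch of that suggestion (the type-parameter placement is
inferred; the real interfaces may differ):

{code:java}
// Move the result-type parameter onto the task itself.
public interface BackgroundTask<T extends BackgroundTaskResult>
    extends Callable<T> {
  int getPriority();   // assumed here only for the queue ordering below
}

// The queue then holds tasks without carrying their result type.
public class BackgroundTaskQueue {
  private final PriorityQueue<BackgroundTask<?>> tasks =
      new PriorityQueue<>(Comparator.comparingInt(BackgroundTask::getPriority));
}
{code}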
   
   
   
   






[jira] [Updated] (HDDS-4231) Background Service blocks on task results

2020-09-28 Thread Nanda kumar (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-4231:
--
Status: Patch Available  (was: Open)

> Background Service blocks on task results
> -
>
> Key: HDDS-4231
> URL: https://issues.apache.org/jira/browse/HDDS-4231
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
>  Labels: pull-request-available
>
> Background service currently waits on the results of the tasks. The idea is
> to track the time it took for the task to execute and log if a task takes
> more than the configured timeout.
> This does not require waiting on the task results and can be achieved by just 
> comparing the execution time of a task with the timeout value.
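
A small sketch of the non-blocking variant described above (the executor
wiring, logger, and timeout names are assumptions):

{code:java}
final long start = System.nanoTime();
CompletableFuture.runAsync(() -> {
  try {
    task.call();                  // run the task without blocking the service
  } catch (Exception e) {
    LOG.error("Background task failed", e);
  }
}, executor).thenRun(() -> {
  long elapsedMs = (System.nanoTime() - start) / 1_000_000;
  if (elapsedMs > serviceTimeoutMs) {
    LOG.warn("Background task took {} ms (timeout {} ms)",
        elapsedMs, serviceTimeoutMs);
  }
});
{code}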






[GitHub] [hadoop-ozone] nandakumar131 commented on a change in pull request #1378: HDDS-4133. Use new ContainerManager in SCM.

2020-09-28 Thread GitBox


nandakumar131 commented on a change in pull request #1378:
URL: https://github.com/apache/hadoop-ozone/pull/1378#discussion_r496149425



##
File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerID.java
##
@@ -41,8 +41,8 @@
* @param id int
*/
   private ContainerID(long id) {
-Preconditions.checkState(id > 0,
-"Container ID should be a positive. %s.", id);
+Preconditions.checkState(id >= 0,

Review comment:
   We don't create a container with ID 0, but 0 is a valid ID.
   The reason for allowing 0 as a value for container ID is to avoid the 
explicit null check that we do in HDDS-1302.








[GitHub] [hadoop-ozone] avijayanhwx merged pull request #1432: HDDS-4252. Add the current layout versions to DN - SCM proto payload.

2020-09-28 Thread GitBox


avijayanhwx merged pull request #1432:
URL: https://github.com/apache/hadoop-ozone/pull/1432


   






[jira] [Updated] (HDDS-4252) Add the current layout versions to DN - SCM proto payload.

2020-09-28 Thread Aravindan Vijayan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-4252:

Fix Version/s: 1.1.0

> Add the current layout versions to DN - SCM proto payload.
> --
>
> Key: HDDS-4252
> URL: https://issues.apache.org/jira/browse/HDDS-4252
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Reporter: Prashant Pogde
>Assignee: Prashant Pogde
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>







[jira] [Resolved] (HDDS-4252) Add the current layout versions to DN - SCM proto payload.

2020-09-28 Thread Aravindan Vijayan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan resolved HDDS-4252.
-
Resolution: Fixed

PR Merged. 

[~ppogde] Can you add some description to this JIRA?

> Add the current layout versions to DN - SCM proto payload.
> --
>
> Key: HDDS-4252
> URL: https://issues.apache.org/jira/browse/HDDS-4252
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Reporter: Prashant Pogde
>Assignee: Prashant Pogde
>Priority: Major
>  Labels: pull-request-available
>







[GitHub] [hadoop-ozone] avijayanhwx commented on pull request #1432: HDDS-4252. Add the current layout versions to DN - SCM proto payload.

2020-09-28 Thread GitBox


avijayanhwx commented on pull request #1432:
URL: https://github.com/apache/hadoop-ozone/pull/1432#issuecomment-700205929


   Since the failure is unrelated and this work is going on in a branch, I am
merging these changes. Thanks for the review @linyiqun.






[GitHub] [hadoop-ozone] elek merged pull request #1328: HDDS-4102. Normalize Keypath for lookupKey.

2020-09-28 Thread GitBox


elek merged pull request #1328:
URL: https://github.com/apache/hadoop-ozone/pull/1328


   






[jira] [Updated] (HDDS-4102) Normalize Keypath for lookupKey

2020-09-28 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek updated HDDS-4102:
--
Target Version/s: 1.1.0
  Resolution: Fixed
  Status: Resolved  (was: Patch Available)

> Normalize Keypath for lookupKey
> ---
>
> Key: HDDS-4102
> URL: https://issues.apache.org/jira/browse/HDDS-4102
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> When ozone.om.enable.filesystem.paths is enabled, OM normalizes the path and
> stores the keyname.
> Now when a user tries to read the file from S3 using the keyName which the
> user used to create the key, it will return the error KEY_NOT_FOUND.
> The issue is that lookupKey needs to normalize the path when
> ozone.om.enable.filesystem.paths is enabled. This is a common API used by
> S3/FS.






[GitHub] [hadoop-ozone] nandakumar131 commented on a change in pull request #1378: HDDS-4133. Use new ContainerManager in SCM.

2020-09-28 Thread GitBox


nandakumar131 commented on a change in pull request #1378:
URL: https://github.com/apache/hadoop-ozone/pull/1378#discussion_r496151525



##
File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerManagerImpl.java
##
@@ -242,19 +286,82 @@ public void removeContainerReplica(final ContainerID id,
   @Override
   public void updateDeleteTransactionId(
   final Map<Long, Long> deleteTransactionMap) throws IOException {
-throw new UnsupportedOperationException("Not yet implemented!");
+lock.writeLock().lock();
+try {
+  containerStateManager.updateDeleteTransactionId(deleteTransactionMap);
+} finally {
+  lock.writeLock().unlock();
+}
   }
 
   @Override
   public ContainerInfo getMatchingContainer(final long size, final String 
owner,
-  final Pipeline pipeline, final List<ContainerID> excludedContainerIDS) {
-throw new UnsupportedOperationException("Not yet implemented!");
+  final Pipeline pipeline, final Set<ContainerID> excludedContainerIDs) {
+NavigableSet<ContainerID> containerIDs;
+ContainerInfo containerInfo;
+try {
+  synchronized (pipeline.getId()) {

Review comment:
   This is to avoid the warning that we get when we synchronize on the
method parameter. This is just to fool the IDE.
   
   In general, synchronizing on a method parameter is bad. We have to fix this
in a better way.
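
   One conventional fix, sketched here as an assumption rather than the agreed
solution, is a per-pipeline lock object instead of synchronizing on the
parameter itself:

{code:java}
// One lock object per pipeline, owned by the manager rather than the caller.
private final ConcurrentHashMap<PipelineID, Object> pipelineLocks =
    new ConcurrentHashMap<>();

// ...
Object lock = pipelineLocks.computeIfAbsent(pipeline.getId(), id -> new Object());
synchronized (lock) {
  // look up or allocate the matching container for this pipeline
}
{code}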








[GitHub] [hadoop-ozone] hanishakoneru commented on pull request #1298: HDDS-3869. Use different column families for datanode block and metadata

2020-09-28 Thread GitBox


hanishakoneru commented on pull request #1298:
URL: https://github.com/apache/hadoop-ozone/pull/1298#issuecomment-700258852


   +1 pending CI.






[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #1404: HDDS-2949: mkdir : store directory entries in a separate table

2020-09-28 Thread GitBox


bharatviswa504 commented on a change in pull request #1404:
URL: https://github.com/apache/hadoop-ozone/pull/1404#discussion_r496145730



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMFileRequest.java
##
@@ -129,6 +134,131 @@ public static OMPathInfo verifyFilesInPath(
 return new OMPathInfo(missing, OMDirectoryResult.NONE, inheritAcls);
   }
 
+  /**
+   * Verify any dir/key exist in the given path in the specified
+   * volume/bucket by iterating through directory table.
+   *
+   * @param omMetadataManager OM Metadata manager
+   * @param volumeNamevolume name
+   * @param bucketNamebucket name
+   * @param keyName   key name
+   * @param keyPath   path
+   * @return OMPathInfoV1 path info object
+   * @throws IOException on DB failure
+   */
+  public static OMPathInfoV1 verifyDirectoryKeysInPath(
+  @Nonnull OMMetadataManager omMetadataManager,
+  @Nonnull String volumeName,
+  @Nonnull String bucketName, @Nonnull String keyName,
+  @Nonnull Path keyPath) throws IOException {
+
+String leafNodeName = OzoneFSUtils.getFileName(keyName);
+List<String> missing = new ArrayList<>();
+List<OzoneAcl> inheritAcls = new ArrayList<>();
+OMDirectoryResult result = OMDirectoryResult.NONE;
+
+Iterator<Path> elements = keyPath.iterator();
+// TODO: volume id and bucket id generation logic.
+String bucketKey = omMetadataManager.getBucketKey(volumeName, bucketName);
+OmBucketInfo omBucketInfo =
+omMetadataManager.getBucketTable().get(bucketKey);
+inheritAcls = omBucketInfo.getAcls();
+long lastKnownParentId = omBucketInfo.getObjectID();
+OmDirectoryInfo parentDirInfo = null;
+String dbDirName = ""; // absolute path for trace logs
+// for better logging
+StringBuilder fullKeyPath = new StringBuilder(bucketKey);
+while (elements.hasNext()) {
+  String fileName = elements.next().toString();
+  fullKeyPath.append(OzoneConsts.OM_KEY_PREFIX);
+  fullKeyPath.append(fileName);
+  if (missing.size() > 0) {
+// Add all the sub-dirs to the missing list except the leaf element.
+// For example, /vol1/buck1/a/b/c/d/e/f/file1.txt.
+// Assume /vol1/buck1/a/b/c exists, then add d, e, f into missing list.
+if(elements.hasNext()){
+  // skips leaf node.
+  missing.add(fileName);
+}
+continue;
+  }
+
+  // For example, /vol1/buck1/a/b/c/d/e/f/file1.txt
+  // 1. Do lookup on directoryTable. If not exists goto next step.
+  // 2. Do look on keyTable. If not exists goto next step.
+  // 3. Add 'sub-dir' to missing parents list
+  String dbNodeName = omMetadataManager.getOzonePathKey(
+  lastKnownParentId, fileName);
+  OmDirectoryInfo omDirInfo = omMetadataManager.getDirectoryTable().
+  get(dbNodeName);
+  if (omDirInfo != null) {
+dbDirName += omDirInfo.getName() + OzoneConsts.OZONE_URI_DELIMITER;
+if (elements.hasNext()) {
+  result = OMDirectoryResult.DIRECTORY_EXISTS_IN_GIVENPATH;
+  lastKnownParentId = omDirInfo.getObjectID();
+  inheritAcls = omDirInfo.getAcls();
+  continue;
+} else {
+  // Checked all the sub-dirs till the leaf node.
+  // Found a directory in the given path.
+  result = OMDirectoryResult.DIRECTORY_EXISTS;
+}
+  } else {
+// Get parentID from the lastKnownParent. For any files, directly under
+// the bucket, the parent is the bucketID. Say, "/vol1/buck1/file1"
+// TODO: Need to add UT for this case along with OMFileCreateRequest.
+if (omMetadataManager.getKeyTable().isExist(dbNodeName)) {
+  if (elements.hasNext()) {
+// Found a file in the given key name.
+result = OMDirectoryResult.FILE_EXISTS_IN_GIVENPATH;
+  } else {
+// Checked all the sub-dirs till the leaf file.
+// Found a file with the given key name.
+result = OMDirectoryResult.FILE_EXISTS;
+  }
+  break; // Skip directory traversal as it hits key.
+}
+
+// Add to missing list, there is no such file/directory with given name.
+if (elements.hasNext()) {
+  missing.add(fileName);
+}
+
+String dbDirKeyName = omMetadataManager.getOzoneDirKey(volumeName,
+bucketName, dbDirName);
+LOG.trace("Acls inherited from parent " + dbDirKeyName + " are : "
++ inheritAcls);
+  }
+}
+
+if (result == OMDirectoryResult.DIRECTORY_EXISTS_IN_GIVENPATH) {

Review comment:
   Now we don't really need this: the new logic checks from parent to leaf. 
In the previous logic, which checked from leaf to parent, 
DIRECTORY_EXISTS_IN_GIVENPATH made sense, so we don't really need this 
special handling anymore.

##
File path: 
hadoop-ozone/ozone-manager/src/main/java/or

[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #1404: HDDS-2949: mkdir : store directory entries in a separate table

2020-09-28 Thread GitBox


bharatviswa504 commented on a change in pull request #1404:
URL: https://github.com/apache/hadoop-ozone/pull/1404#discussion_r496246400



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMDirectoryCreateRequestV1.java
##
@@ -0,0 +1,312 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.file;
+
+import com.google.common.base.Optional;
+import org.apache.commons.lang3.tuple.ImmutablePair;
+import org.apache.hadoop.hdds.utils.db.cache.CacheKey;
+import org.apache.hadoop.hdds.utils.db.cache.CacheValue;
+import org.apache.hadoop.ozone.OzoneAcl;
+import org.apache.hadoop.ozone.audit.AuditLogger;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.*;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
+import org.apache.hadoop.ozone.om.request.util.OmResponseUtil;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.file.OMDirectoryCreateResponseV1;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.CreateDirectoryRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.CreateDirectoryResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.KeyArgs;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.Status;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+import static 
org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes.FILE_ALREADY_EXISTS;
+import static 
org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes.INVALID_KEY_NAME;
+import static 
org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+import static 
org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.*;
+
+/**
+ * Handle create directory request. It will add path components to the 
directory
+ * table and maintains file system semantics.
+ */
+public class OMDirectoryCreateRequestV1 extends OMDirectoryCreateRequest {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMDirectoryCreateRequestV1.class);
+
+  public OMDirectoryCreateRequestV1(OMRequest omRequest) {
+super(omRequest);
+  }
+
+  @Override
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+  long trxnLogIndex, OzoneManagerDoubleBufferHelper omDoubleBufferHelper) {
+
+CreateDirectoryRequest createDirectoryRequest = getOmRequest()
+.getCreateDirectoryRequest();
+KeyArgs keyArgs = createDirectoryRequest.getKeyArgs();
+
+String volumeName = keyArgs.getVolumeName();
+String bucketName = keyArgs.getBucketName();
+String keyName = keyArgs.getKeyName();
+
+OMResponse.Builder omResponse = OmResponseUtil.getOMResponseBuilder(
+getOmRequest());
+
omResponse.setCreateDirectoryResponse(CreateDirectoryResponse.newBuilder());
+OMMetrics omMetrics = ozoneManager.getMetrics();
+omMetrics.incNumCreateDirectory();
+
+AuditLogger auditLogger = ozoneManager.getAuditLogger();
+OzoneManagerProtocolProtos.UserInfo userInfo = 
getOmRequest().getUserInfo();
+
+Map<String, String> auditMap = buildKeyArgsAuditMap(keyArgs);
+OMMetadataManager omMetadataManager = ozoneManager.getMetadataManager();
+boolean acquiredLock = false;
+IOException exception = null;
+OMClientResp

[GitHub] [hadoop-ozone] fapifta commented on pull request #1430: HDDS-4227. Implement a 'Prepare For Upgrade' step in OM that applies all committed Ratis transactions.

2020-09-28 Thread GitBox


fapifta commented on pull request #1430:
URL: https://github.com/apache/hadoop-ozone/pull/1430#issuecomment-700347698


   Hi @avijayanhwx,
   
   Sorry for the long silence; the changes look good to me with the follow-up 
items, +1.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-4286) Implement one-by-one LayoutFeature finalization in OMUpgradeFinalizer

2020-09-28 Thread Jira
István Fajth created HDDS-4286:
--

 Summary: Implement one-by-one LayoutFeature finalization in 
OMUpgradeFinalizer
 Key: HDDS-4286
 URL: https://issues.apache.org/jira/browse/HDDS-4286
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: István Fajth


The initial implementation finalizes all LayoutFeatures at once inside the 
state machine. As this is done while the system is up and running, and may 
take a long time to execute, we need to explore a way to finalize the 
features one by one.
This JIRA is to follow up on that.

A possible implementation under consideration is to post a Ratis request 
internally in Ozone Manager to finalize a layout feature.




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] swagle commented on a change in pull request #1430: HDDS-4227. Implement a 'Prepare For Upgrade' step in OM that applies all committed Ratis transactions.

2020-09-28 Thread GitBox


swagle commented on a change in pull request #1430:
URL: https://github.com/apache/hadoop-ozone/pull/1430#discussion_r496341141



##
File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/ratis/RatisUpgradeUtils.java
##
@@ -0,0 +1,96 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+
+package org.apache.hadoop.hdds.ratis;
+
+import java.io.IOException;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.ratis.protocol.RaftGroup;
+import org.apache.ratis.server.impl.RaftServerImpl;
+import org.apache.ratis.server.impl.RaftServerProxy;
+import org.apache.ratis.statemachine.StateMachine;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Ratis utility functions.
+ */
+public final class RatisUpgradeUtils {
+
+  private RatisUpgradeUtils() {
+  }
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(RatisUpgradeUtils.class);
+
+  /**
+   * Flush all committed transactions in a given Raft Server for a given group.
+   * @param stateMachine state machine to use
+   * @param raftGroup raft group
+   * @param server Raft server proxy instance.
+   * @param maxTimeToWaitSeconds Max time to wait before declaring failure.
+   * @throws InterruptedException when interrupted
+   * @throws IOException on error while waiting
+   */
+  public static void waitForAllTxnsApplied(
+  StateMachine stateMachine,
+  RaftGroup raftGroup,
+  RaftServerProxy server,
+  long maxTimeToWaitSeconds,
+  long timeBetweenRetryInSeconds)
+  throws InterruptedException, IOException {
+
+long intervalTime = TimeUnit.SECONDS.toMillis(timeBetweenRetryInSeconds);
+long endTime = System.currentTimeMillis() +
+TimeUnit.SECONDS.toMillis(maxTimeToWaitSeconds);
+boolean success = false;
+while (System.currentTimeMillis() < endTime) {

Review comment:
   Wouldn't this always be true? [ curr < curr + num ]





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-4287) Exclude protobuff classes from ozone-filesystem-hadoop3 jars

2020-09-28 Thread Uma Maheswara Rao G (Jira)
Uma Maheswara Rao G created HDDS-4287:
-

 Summary: Exclude protobuff classes from ozone-filesystem-hadoop3 
jars
 Key: HDDS-4287
 URL: https://issues.apache.org/jira/browse/HDDS-4287
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G


Currently the ozone-filesystem-hadoop3 jar includes protobuf classes. We 
already treat the dependency on Hadoop jars as a prerequisite, and Hadoop 
brings the protobuf classes along with its jars. So bundling the protobuf 
classes again in the ozone-filesystem-hadoop3 jar is just duplication; we can 
exclude those protobuf classes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] ChenSammi merged pull request #1434: HDDS-3727. Volume space: check quotaUsageInBytes when write key.

2020-09-28 Thread GitBox


ChenSammi merged pull request #1434:
URL: https://github.com/apache/hadoop-ozone/pull/1434


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] ChenSammi commented on pull request #1434: HDDS-3727. Volume space: check quotaUsageInBytes when write key.

2020-09-28 Thread GitBox


ChenSammi commented on pull request #1434:
URL: https://github.com/apache/hadoop-ozone/pull/1434#issuecomment-700401426


   LGTM +1. 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] GlenGeng commented on a change in pull request #1274: HDDS-3810. Add the logic to distribute open containers among the pipelines of a datanode.

2020-09-28 Thread GitBox


GlenGeng commented on a change in pull request #1274:
URL: https://github.com/apache/hadoop-ozone/pull/1274#discussion_r496348325



##
File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DatanodeInfo.java
##
@@ -40,6 +43,8 @@
   private long lastStatsUpdatedTime;
 
   private List<StorageReportProto> storageReports;
+  private List<MetadataStorageReportProto> metadataStorageReports;
 
+  /**
+   * Updates the datanode storage reports.

Review comment:
   Stale javadoc; suggestion: "Updates the datanode metadata storage 
reports."

##
File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
##
@@ -530,6 +540,43 @@ public int getNumHealthyVolumes(List<DatanodeDetails> dnList) {
 return Collections.max(volumeCountList);
   }
 
+  /**
+   * Returns the pipeline limit for the datanode.
+   * if the datanode pipeline limit is set, consider that as the max
+   * pipeline limit.
+   * In case, the pipeline limit is not set, the max pipeline limit
+   * will be based on the no of raft log volume reported and provided
+   * that it has atleast one healthy data volume.
+   */
+  @Override
+  public int maxPipelineLimit(DatanodeDetails dn) {
+try {
+  if (heavyNodeCriteria > 0) {
+return heavyNodeCriteria;
+  } else if (nodeStateManager.getNode(dn).getHealthyVolumeCount() > 0) {
+return numPipelinesPerRaftLogDisk *
+nodeStateManager.getNode(dn).getRaftLogVolumeCount();
+  }
+} catch (NodeNotFoundException e) {
+  LOG.warn("Cannot generate NodeStat, datanode {} not found.",
+  dn.getUuid());
+}
+return 0;
+  }
+
+  /**
+   * Returns the pipeline limit for set of datanodes.
+   */
+  @Override
+  public int maxPipelineLimit(List<DatanodeDetails> dnList) {

Review comment:
   The method name is `maxPipelineLimit`, but the logic calculates the min, 
which is a little bit weird. How about `minPipelineLimit`?
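
   For reference, what the list variant computes is the minimum of the 
per-DN limits, roughly (sketch only, not the actual patch):

   int limit = Integer.MAX_VALUE;
   for (DatanodeDetails dn : dnList) {
     limit = Math.min(limit, maxPipelineLimit(dn));  // per-DN limit from the method above
   }
   return dnList.isEmpty() ? 0 : limit;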

##
File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
##
@@ -308,6 +308,10 @@
   OZONE_SCM_KEY_VALUE_CONTAINER_DELETION_CHOOSING_POLICY =
   "ozone.scm.keyvalue.container.deletion-choosing.policy";
 
+  public static final String OZONE_SCM_PIPELINE_PER_RAFT_LOG_DISK =

Review comment:
   How about `OZONE_SCM_PIPELINE_PER_METADATA_DISK`?
   Both "raft log disk" and "metadata storage report" exist in the 
code context; they are similar to each other, which brings in some redundancy.
   BTW, the former name may lead to misunderstanding. One might think that each 
raft log disk contains one raft log, which is straightforward; nevertheless, 
raft log disk to raft log is a one-to-many relationship.

##
File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/SCMContainerManager.java
##
@@ -101,12 +103,23 @@ public SCMContainerManager(
 this.numContainerPerVolume = conf
 .getInt(ScmConfigKeys.OZONE_SCM_PIPELINE_OWNER_CONTAINER_COUNT,
 ScmConfigKeys.OZONE_SCM_PIPELINE_OWNER_CONTAINER_COUNT_DEFAULT);
+this.numPipelinesPerRaftLogDisk = conf
+.getInt(ScmConfigKeys.OZONE_SCM_PIPELINE_PER_RAFT_LOG_DISK,
+ScmConfigKeys.OZONE_SCM_PIPELINE_PER_RAFT_LOG_DISK_DEFAULT);
 
 loadExistingContainers();
 
 scmContainerManagerMetrics = SCMContainerManagerMetrics.create();
   }
 
+  private int getOpenContainerCountPerPipeline(Pipeline pipeline) {
+int totalContainerCountPerDn = numContainerPerVolume *
+pipelineManager.getNumHealthyVolumes(pipeline);

Review comment:
   `pipelineManager.getMaxHealthyVolumeNum(pipeline) / 
pipelineManager.getMinPipelineLimit(pipeline)` may be more expressive.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] GlenGeng commented on a change in pull request #1274: HDDS-3810. Add the logic to distribute open containers among the pipelines of a datanode.

2020-09-28 Thread GitBox


GlenGeng commented on a change in pull request #1274:
URL: https://github.com/apache/hadoop-ozone/pull/1274#discussion_r496361400



##
File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/SCMContainerManager.java
##
@@ -101,12 +103,23 @@ public SCMContainerManager(
 this.numContainerPerVolume = conf
 .getInt(ScmConfigKeys.OZONE_SCM_PIPELINE_OWNER_CONTAINER_COUNT,
 ScmConfigKeys.OZONE_SCM_PIPELINE_OWNER_CONTAINER_COUNT_DEFAULT);
+this.numPipelinesPerRaftLogDisk = conf
+.getInt(ScmConfigKeys.OZONE_SCM_PIPELINE_PER_RAFT_LOG_DISK,
+ScmConfigKeys.OZONE_SCM_PIPELINE_PER_RAFT_LOG_DISK_DEFAULT);
 
 loadExistingContainers();
 
 scmContainerManagerMetrics = SCMContainerManagerMetrics.create();
   }
 
+  private int getOpenContainerCountPerPipeline(Pipeline pipeline) {
+int totalContainerCountPerDn = numContainerPerVolume *
+pipelineManager.getNumHealthyVolumes(pipeline);
+int maxPipelineCountPerDn = pipelineManager.maxPipelineLimit(pipeline);
+return (int) Math.ceil(
+((double) totalContainerCountPerDn / maxPipelineCountPerDn));
+  }
+

Review comment:
   We might need to change the pipeline placement policy to make sure we only 
allocate pipelines on homogeneous machines.
   
   If one pipeline connects a strong DN and a weak DN, its open container number 
will be `volumeNumOfStrongDN * 3 / pipelineNumOfWeakDN`.
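
   To make the arithmetic concrete (all numbers are assumed, purely for 
illustration):

   int numContainerPerVolume = 3;                // assumed config value
   int healthyVolumes = 10;                      // assumed healthiest-DN report
   int maxPipelineCountPerDn = 6;                // assumed per-DN pipeline limit
   int totalContainerCountPerDn = numContainerPerVolume * healthyVolumes;  // 30
   int openContainers = (int) Math.ceil(
       (double) totalContainerCountPerDn / maxPipelineCountPerDn);         // 5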





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] ChenSammi commented on pull request #1338: HDDS-4023. Delete closed container after all blocks have been deleted.

2020-09-28 Thread GitBox


ChenSammi commented on pull request #1338:
URL: https://github.com/apache/hadoop-ozone/pull/1338#issuecomment-700409940


   > 
   > 
   > This change looks almost good now. I wonder about two final things:
   > 
   > 1. In `updateContainerStats(...)` do you think we should return if the 
container is DELETING or DELETED without making any updates? If this is a stale 
replica, then it may not be empty and hence could update the container stats 
incorrectly. If the Container is already in DELETING or DELETED state, then we 
can ignore any changes to it, as we know the stale replica will get removed 
anyway.
   > 
   > 2. I am wondering if we could receive a stale replica when the 
container is DELETING. Then a replica would get added in 
`updateContainerReplica(...)`. Then later the container will go to DELETED and 
the stale replica will get reported again - at that point we will send a delete 
command, but the replica will never get removed from memory now I think. Would 
it make sense to send the delete command for any replicas received when the 
container is DELETING or DELETED?
   
   
   
   There is the following logic in ReplicationManager, which will handle the 
replicas reported while the container state is DELETING. 
   So we only need to send a delete replica command for a DELETED container 
during container report processing, and let ReplicationManager handle the 
DELETING container's replicas: when a container is in DELETING state, it is 
certain that SCM has sent out some replica deletion commands, but we cannot 
tell from the container report whether a replica is stale or not. 
   
   
if (state == LifeCycleState.DELETING) {
  handleContainerUnderDelete(container, replicas);
  return;
}
   
   @sodonnel 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] bharatviswa504 opened a new pull request #1451: Hdds 4117

2020-09-28 Thread GitBox


bharatviswa504 opened a new pull request #1451:
URL: https://github.com/apache/hadoop-ozone/pull/1451


   ## What changes were proposed in this pull request?
   
   Normalize the key path for listKeys.
   When ozone.om.enable.filesystem.paths is enabled, OM normalizes the path 
and stores the normalized key name in the OM DB KeyTable.
   
   When listKeys uses the given keyName (not a normalized key path) as the 
keyPrefix and startKey, list-keys will return empty results.
   
   Similar to HDDS-4102, we should normalize startKey and keyPrefix.
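
   Illustration of the idea (OM's own helper lives in OzoneFSUtils; java.nio 
is used here only to show the effect of normalization):

   String raw = "a//b/./c";  // keyPrefix exactly as the client sent it
   String normalized = java.nio.file.Paths.get(raw).normalize().toString();
   System.out.println(normalized);  // a/b/c on a POSIX file system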
   
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-4117
   
   ## How was this patch tested?
   
   Added a test.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] avijayanhwx commented on a change in pull request #1430: HDDS-4227. Implement a 'Prepare For Upgrade' step in OM that applies all committed Ratis transactions.

2020-09-28 Thread GitBox


avijayanhwx commented on a change in pull request #1430:
URL: https://github.com/apache/hadoop-ozone/pull/1430#discussion_r496366104



##
File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/ratis/RatisUpgradeUtils.java
##
@@ -0,0 +1,96 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+
+package org.apache.hadoop.hdds.ratis;
+
+import java.io.IOException;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.ratis.protocol.RaftGroup;
+import org.apache.ratis.server.impl.RaftServerImpl;
+import org.apache.ratis.server.impl.RaftServerProxy;
+import org.apache.ratis.statemachine.StateMachine;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Ratis utility functions.
+ */
+public final class RatisUpgradeUtils {
+
+  private RatisUpgradeUtils() {
+  }
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(RatisUpgradeUtils.class);
+
+  /**
+   * Flush all committed transactions in a given Raft Server for a given group.
+   * @param stateMachine state machine to use
+   * @param raftGroup raft group
+   * @param server Raft server proxy instance.
+   * @param maxTimeToWaitSeconds Max time to wait before declaring failure.
+   * @throws InterruptedException when interrupted
+   * @throws IOException on error while waiting
+   */
+  public static void waitForAllTxnsApplied(
+  StateMachine stateMachine,
+  RaftGroup raftGroup,
+  RaftServerProxy server,
+  long maxTimeToWaitSeconds,
+  long timeBetweenRetryInSeconds)
+  throws InterruptedException, IOException {
+
+long intervalTime = TimeUnit.SECONDS.toMillis(timeBetweenRetryInSeconds);
+long endTime = System.currentTimeMillis() +
+TimeUnit.SECONDS.toMillis(maxTimeToWaitSeconds);
+boolean success = false;
+while (System.currentTimeMillis() < endTime) {

Review comment:
   The 'curr' in the RHS was assigned before the while loop and hence does 
not change; the LHS 'curr' moves forward.
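
   A minimal sketch of the loop in question (checkAllTxnsApplied is a 
hypothetical stand-in for the real applied-index check):

   long endTime = System.currentTimeMillis()
       + TimeUnit.SECONDS.toMillis(maxTimeToWaitSeconds); // fixed once, before the loop
   while (System.currentTimeMillis() < endTime) {         // LHS advances; RHS stays put
     if (checkAllTxnsApplied()) {                         // hypothetical check
       success = true;
       break;
     }
     Thread.sleep(intervalTime);                          // back off between retries
   }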
   
   





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] GlenGeng commented on a change in pull request #1274: HDDS-3810. Add the logic to distribute open containers among the pipelines of a datanode.

2020-09-28 Thread GitBox


GlenGeng commented on a change in pull request #1274:
URL: https://github.com/apache/hadoop-ozone/pull/1274#discussion_r496361400



##
File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/SCMContainerManager.java
##
@@ -101,12 +103,23 @@ public SCMContainerManager(
 this.numContainerPerVolume = conf
 .getInt(ScmConfigKeys.OZONE_SCM_PIPELINE_OWNER_CONTAINER_COUNT,
 ScmConfigKeys.OZONE_SCM_PIPELINE_OWNER_CONTAINER_COUNT_DEFAULT);
+this.numPipelinesPerRaftLogDisk = conf
+.getInt(ScmConfigKeys.OZONE_SCM_PIPELINE_PER_RAFT_LOG_DISK,
+ScmConfigKeys.OZONE_SCM_PIPELINE_PER_RAFT_LOG_DISK_DEFAULT);
 
 loadExistingContainers();
 
 scmContainerManagerMetrics = SCMContainerManagerMetrics.create();
   }
 
+  private int getOpenContainerCountPerPipeline(Pipeline pipeline) {
+int totalContainerCountPerDn = numContainerPerVolume *
+pipelineManager.getNumHealthyVolumes(pipeline);
+int maxPipelineCountPerDn = pipelineManager.maxPipelineLimit(pipeline);
+return (int) Math.ceil(
+((double) totalContainerCountPerDn / maxPipelineCountPerDn));
+  }
+

Review comment:
   We might need to change the pipeline placement policy to make sure we only 
allocate pipelines on homogeneous machines.
   
   If one pipeline connects a strong DN and a weak DN, its open container number 
will be `volumeNumOfStrongDN * 3 / pipelineNumOfWeakDN`, so pipelines on the 
same DN may not be able to evenly distribute the open containers of that DN.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] bshashikant commented on a change in pull request #1274: HDDS-3810. Add the logic to distribute open containers among the pipelines of a datanode.

2020-09-28 Thread GitBox


bshashikant commented on a change in pull request #1274:
URL: https://github.com/apache/hadoop-ozone/pull/1274#discussion_r496394076



##
File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/SCMContainerManager.java
##
@@ -101,12 +103,23 @@ public SCMContainerManager(
 this.numContainerPerVolume = conf
 .getInt(ScmConfigKeys.OZONE_SCM_PIPELINE_OWNER_CONTAINER_COUNT,
 ScmConfigKeys.OZONE_SCM_PIPELINE_OWNER_CONTAINER_COUNT_DEFAULT);
+this.numPipelinesPerRaftLogDisk = conf
+.getInt(ScmConfigKeys.OZONE_SCM_PIPELINE_PER_RAFT_LOG_DISK,
+ScmConfigKeys.OZONE_SCM_PIPELINE_PER_RAFT_LOG_DISK_DEFAULT);
 
 loadExistingContainers();
 
 scmContainerManagerMetrics = SCMContainerManagerMetrics.create();
   }
 
+  private int getOpenContainerCountPerPipeline(Pipeline pipeline) {
+int totalContainerCountPerDn = numContainerPerVolume *
+pipelineManager.getNumHealthyVolumes(pipeline);
+int maxPipelineCountPerDn = pipelineManager.maxPipelineLimit(pipeline);
+return (int) Math.ceil(
+((double) totalContainerCountPerDn / maxPipelineCountPerDn));
+  }
+

Review comment:
   I agree with @GlenGeng here. The pipeline placement should choose 
homogeneous datanodes. The choice being made here is to have many containers 
open on the minimal set of pipelines that we can have out of the set of 
datanodes in the pipelines.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] bshashikant commented on a change in pull request #1274: HDDS-3810. Add the logic to distribute open containers among the pipelines of a datanode.

2020-09-28 Thread GitBox


bshashikant commented on a change in pull request #1274:
URL: https://github.com/apache/hadoop-ozone/pull/1274#discussion_r496394076



##
File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/SCMContainerManager.java
##
@@ -101,12 +103,23 @@ public SCMContainerManager(
 this.numContainerPerVolume = conf
 .getInt(ScmConfigKeys.OZONE_SCM_PIPELINE_OWNER_CONTAINER_COUNT,
 ScmConfigKeys.OZONE_SCM_PIPELINE_OWNER_CONTAINER_COUNT_DEFAULT);
+this.numPipelinesPerRaftLogDisk = conf
+.getInt(ScmConfigKeys.OZONE_SCM_PIPELINE_PER_RAFT_LOG_DISK,
+ScmConfigKeys.OZONE_SCM_PIPELINE_PER_RAFT_LOG_DISK_DEFAULT);
 
 loadExistingContainers();
 
 scmContainerManagerMetrics = SCMContainerManagerMetrics.create();
   }
 
+  private int getOpenContainerCountPerPipeline(Pipeline pipeline) {
+int totalContainerCountPerDn = numContainerPerVolume *
+pipelineManager.getNumHealthyVolumes(pipeline);
+int maxPipelineCountPerDn = pipelineManager.maxPipelineLimit(pipeline);
+return (int) Math.ceil(
+((double) totalContainerCountPerDn / maxPipelineCountPerDn));
+  }
+

Review comment:
   I agree with @GlenGeng here. The pipeline placement should choose 
homogeneous datanodes. The choice being made here, have many containers open on 
a minimal set of pipelines that we can have out of the set of datanodes in the 
pipleines.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] bshashikant edited a comment on pull request #1346: HDDS-4115. CLI command to show current SCM leader and follower status.

2020-09-28 Thread GitBox


bshashikant edited a comment on pull request #1346:
URL: https://github.com/apache/hadoop-ozone/pull/1346#issuecomment-700435789


   I feel the CLI should be common for both OM and SCM and probably extended to 
Datanodes as well.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] bshashikant commented on pull request #1346: HDDS-4115. CLI command to show current SCM leader and follower status.

2020-09-28 Thread GitBox


bshashikant commented on pull request #1346:
URL: https://github.com/apache/hadoop-ozone/pull/1346#issuecomment-700435789


   The CLI should be generic for both OM and SCM and probably extended to 
Datanodes as well.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] amaliujia commented on pull request #1346: HDDS-4115. CLI command to show current SCM leader and follower status.

2020-09-28 Thread GitBox


amaliujia commented on pull request #1346:
URL: https://github.com/apache/hadoop-ozone/pull/1346#issuecomment-700441836


   Re @bshashikant 
   
   Agreed. Right now the command itself is unified (for both OM and SCM, we 
name this command `roles`). Then we should unify the behavior of both 
commands (and if there is a DN command, that should behave the same).



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] runzhiwang commented on pull request #1371: HDDS-2922. Balance ratis leader distribution in datanodes

2020-09-28 Thread GitBox


runzhiwang commented on pull request #1371:
URL: https://github.com/apache/hadoop-ozone/pull/1371#issuecomment-700470083


   @xiaoyuyao @bshashikant  I have updated the patch. Could you help review it 
again ? Thank you very much.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-3727) Volume space: check quotaUsageInBytes when write key

2020-09-28 Thread mingchao zhao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingchao zhao updated HDDS-3727:

Fix Version/s: 1.1.0

> Volume space: check quotaUsageInBytes when write key
> 
>
> Key: HDDS-3727
> URL: https://issues.apache.org/jira/browse/HDDS-3727
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Simon Su
>Assignee: mingchao zhao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-3727) Volume space: check quotaUsageInBytes when write key

2020-09-28 Thread mingchao zhao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingchao zhao resolved HDDS-3727.
-
Release Note: PR has been merged, close this.
  Resolution: Fixed

> Volume space: check quotaUsageInBytes when write key
> 
>
> Key: HDDS-3727
> URL: https://issues.apache.org/jira/browse/HDDS-3727
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Simon Su
>Assignee: mingchao zhao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] captainzmc commented on a change in pull request #1412: HDDS-3751. Ozone sh client support bucket quota option.

2020-09-28 Thread GitBox


captainzmc commented on a change in pull request #1412:
URL: https://github.com/apache/hadoop-ozone/pull/1412#discussion_r496451487



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketCreateRequest.java
##
@@ -192,6 +192,10 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
 throw new OMException("Bucket already exist", BUCKET_ALREADY_EXISTS);
   }
 
+  //Check quotaInBytes and quotaInCounts to update
+  checkQuotaBytesValid(omVolumeArgs, omBucketInfo);
+  checkQuotaCountsValid(omVolumeArgs, omBucketInfo);

Review comment:
   Thanks @adoroszlai for the advice. The check needs to read the 
volume or bucket from the DB, so it is better to do it after acquiring the 
lock. The other comments have been fixed.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] timmylicheng merged pull request #1346: HDDS-4115. CLI command to show current SCM leader and follower status.

2020-09-28 Thread GitBox


timmylicheng merged pull request #1346:
URL: https://github.com/apache/hadoop-ozone/pull/1346


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] timmylicheng commented on pull request #1346: HDDS-4115. CLI command to show current SCM leader and follower status.

2020-09-28 Thread GitBox


timmylicheng commented on pull request #1346:
URL: https://github.com/apache/hadoop-ozone/pull/1346#issuecomment-700491675


   +1. Thanks for Rui's contribution.
   Merging



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4115) CLI command to show current SCM leader and follower status

2020-09-28 Thread Li Cheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Cheng updated HDDS-4115:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> CLI command to show current SCM leader and follower status
> --
>
> Key: HDDS-4115
> URL: https://issues.apache.org/jira/browse/HDDS-4115
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Li Cheng
>Assignee: Rui Wang
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-4115) CLI command to show current SCM leader and follower status

2020-09-28 Thread Li Cheng (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203727#comment-17203727
 ] 

Li Cheng commented on HDDS-4115:


Patch is merged. Resolving

> CLI command to show current SCM leader and follower status
> --
>
> Key: HDDS-4115
> URL: https://issues.apache.org/jira/browse/HDDS-4115
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Li Cheng
>Assignee: Rui Wang
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org