[GitHub] [hadoop-ozone] adoroszlai merged pull request #1250: HDDS-4018. Datanode log spammed by NPE

2020-07-23 Thread GitBox


adoroszlai merged pull request #1250:
URL: https://github.com/apache/hadoop-ozone/pull/1250


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] maobaolong commented on a change in pull request #1096: HDDS-3833. Use Pipeline choose policy to choose pipeline from exist pipeline list

2020-07-23 Thread GitBox


maobaolong commented on a change in pull request #1096:
URL: https://github.com/apache/hadoop-ozone/pull/1096#discussion_r459858064



##
File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/BlockManagerImpl.java
##
@@ -222,9 +224,7 @@ public AllocatedBlock allocateBlock(final long size, 
ReplicationType type,
   }
 
   if (null == pipeline) {
-// TODO: #CLUTIL Make the selection policy driven.
-pipeline = availablePipelines
-.get((int) (Math.random() * availablePipelines.size()));
+pipeline = pipelineChoosePolicy.choosePipeline(availablePipelines);

Review comment:
   Done.
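For readers following along, the diff above replaces an inline random pick with a call to a pluggable policy. A minimal sketch of what such a policy abstraction could look like — the interface and class names here are illustrative assumptions, not the actual Ozone API:

```java
import java.util.List;
import java.util.Random;

// Illustrative sketch only: a pluggable "choose" policy, with a random
// implementation mirroring the Math.random()-based selection removed above.
interface ChoosePolicy<T> {
  T choose(List<T> candidates);
}

class RandomChoosePolicy<T> implements ChoosePolicy<T> {
  private final Random random = new Random();

  @Override
  public T choose(List<T> candidates) {
    // Pick a uniformly random element, as the old inline code did.
    return candidates.get(random.nextInt(candidates.size()));
  }
}
```

Swapping in a different strategy (e.g. capacity- or health-aware) then only requires providing another ChoosePolicy implementation, without touching the caller.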








[GitHub] [hadoop-ozone] maobaolong commented on a change in pull request #1096: HDDS-3833. Use Pipeline choose policy to choose pipeline from exist pipeline list

2020-07-23 Thread GitBox


maobaolong commented on a change in pull request #1096:
URL: https://github.com/apache/hadoop-ozone/pull/1096#discussion_r459855685



##
File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
##
@@ -287,6 +287,9 @@
   public static final String OZONE_SCM_PIPELINE_OWNER_CONTAINER_COUNT =
   "ozone.scm.pipeline.owner.container.count";
   public static final int OZONE_SCM_PIPELINE_OWNER_CONTAINER_COUNT_DEFAULT = 3;
+  // Pipeline choose policy:
+  public static final String OZONE_SCM_PIPELINE_CHOOSE_IMPL_KEY =
+  "ozone.scm.pipeline.choose.impl";

Review comment:
   Done.
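The new key implies the policy implementation is selected by configuration. A generic, illustrative sketch of instantiating a class named in configuration — this reflection helper is an assumption for illustration, not the actual SCM loading code:

```java
// Illustrative sketch only: instantiate a class named by a configuration
// value, falling back to a default class name when the key is unset.
class PolicyLoader {
  static Object load(String configuredClassName, String defaultClassName)
      throws Exception {
    String name =
        (configuredClassName != null) ? configuredClassName : defaultClassName;
    // The named class must have a public no-arg constructor.
    return Class.forName(name).getDeclaredConstructor().newInstance();
  }
}
```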








[GitHub] [hadoop-ozone] cxorm commented on pull request #1247: HDDS-3511. Fix definition of DelegationTokenTable in OmMetadataManagerImpl

2020-07-23 Thread GitBox


cxorm commented on pull request #1247:
URL: https://github.com/apache/hadoop-ozone/pull/1247#issuecomment-663342109


   Thank you @aeioulisa for the work (and all checks have passed), LGTM +1.
   






[GitHub] [hadoop-ozone] captainzmc commented on pull request #1233: HDDS-3725. Ozone sh volume client support quota option.

2020-07-23 Thread GitBox


captainzmc commented on pull request #1233:
URL: https://github.com/apache/hadoop-ozone/pull/1233#issuecomment-663341909


   Thanks for @cxorm's review. I have addressed the review comments. Could you 
take another look?






[jira] [Assigned] (HDDS-4023) Delete closed container after all blocks have been deleted

2020-07-23 Thread Sammi Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sammi Chen reassigned HDDS-4023:


Assignee: Sammi Chen

> Delete closed container after all blocks have been deleted
> --
>
> Key: HDDS-4023
> URL: https://issues.apache.org/jira/browse/HDDS-4023
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>
> One of our use cases is that customers delete old objects and files regularly.
> Once the old files are deleted, many containers are left with no user data.
> The goal of this task is to delete all such containers, reducing the metadata
> footprint of both the datanode and SCM.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-3943) Cleanup empty container directory

2020-07-23 Thread Sammi Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sammi Chen reassigned HDDS-3943:


Assignee: Sammi Chen

> Cleanup empty container directory
> -
>
> Key: HDDS-3943
> URL: https://issues.apache.org/jira/browse/HDDS-3943
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>
> Here is the log after a datanode restart. 
> One thing I'd actually like to know is why there are empty directories. 
> 2020-07-07 22:53:55,716 [Thread-5] ERROR 
> org.apache.hadoop.ozone.container.ozoneimpl.ContainerReader: Missing 
> .container file for ContainerID: 37711
> 2020-07-07 22:53:55,716 [Thread-5] ERROR 
> org.apache.hadoop.ozone.container.ozoneimpl.ContainerReader: Missing 
> .container file for ContainerID: 37430
> 2020-07-07 22:53:55,716 [Thread-5] ERROR 
> org.apache.hadoop.ozone.container.ozoneimpl.ContainerReader: Missing 
> .container file for ContainerID: 37528
> 2020-07-07 22:53:55,716 [Thread-13] ERROR 
> org.apache.hadoop.ozone.container.ozoneimpl.ContainerReader: Missing 
> .container file for ContainerID: 105470
> 2020-07-07 22:53:55,716 [Thread-13] ERROR 
> org.apache.hadoop.ozone.container.ozoneimpl.ContainerReader: Missing 
> .container file for ContainerID: 94705
> 2020-07-07 22:53:55,716 [Thread-5] ERROR 
> org.apache.hadoop.ozone.container.ozoneimpl.ContainerReader: Missing 
> .container file for ContainerID: 70861
> 2020-07-07 22:53:55,716 [Thread-13] ERROR 
> org.apache.hadoop.ozone.container.ozoneimpl.ContainerReader: Missing 
> .container file for ContainerID: 26296
> 2020-07-07 22:53:55,717 [Thread-13] ERROR 
> org.apache.hadoop.ozone.container.ozoneimpl.ContainerReader: Missing 
> .container file for ContainerID: 26440
> 2020-07-07 22:53:55,717 [Thread-13] ERROR 
> org.apache.hadoop.ozone.container.ozoneimpl.ContainerReader: Missing 
> .container file for ContainerID: 13725
> 2020-07-07 22:53:55,717 [Thread-13] ERROR 
> org.apache.hadoop.ozone.container.ozoneimpl.ContainerReader: Missing 
> .container file for ContainerID: 13639
> 2020-07-07 22:53:55,717 [Thread-13] ERROR 
> org.apache.hadoop.ozone.container.ozoneimpl.ContainerReader: Missing 
> .container file for ContainerID: 70989
> 2020-07-07 22:53:55,718 [Thread-13] ERROR 
> org.apache.hadoop.ozone.container.ozoneimpl.ContainerReader: Missing 
> .container file for ContainerID: 7
> 2020-07-07 22:53:55,718 [Thread-13] ERROR 
> org.apache.hadoop.ozone.container.ozoneimpl.ContainerReader: Missing 
> .container file for ContainerID: 71030
> 2020-07-07 22:53:55,718 [Thread-13] ERROR 
> org.apache.hadoop.ozone.container.ozoneimpl.ContainerReader: Missing 
> .container file for ContainerID: 9801
> 2020-07-07 22:53:55,718 [Thread-5] ERROR 
> org.apache.hadoop.ozone.container.ozoneimpl.ContainerReader: Missing 
> .container file for ContainerID: 109694
> 2020-07-07 22:53:55,719 [Thread-5] ERROR 
> org.apache.hadoop.ozone.container.ozoneimpl.ContainerReader: Missing 
> .container file for ContainerID: 15166
> 2020-07-07 22:53:55,720 [Thread-5] ERROR 
> org.apache.hadoop.ozone.container.ozoneimpl.ContainerReader: Missing 
> .container file for ContainerID: 79205
> 2020-07-07 22:53:55,720 [Thread-5] ERROR 
> org.apache.hadoop.ozone.container.ozoneimpl.ContainerReader: Missing 
> .container file for ContainerID: 78877
> 2020-07-07 22:53:55,721 [Thread-5] ERROR 
> org.apache.hadoop.ozone.container.ozoneimpl.ContainerReader: Missing 
> .container file for ContainerID: 102541
> 2020-07-07 22:53:55,722 [Thread-5] ERROR 
> org.apache.hadoop.ozone.container.ozoneimpl.ContainerReader: Missing 
> .container file for ContainerID: 94557
> 2020-07-07 22:53:55,723 [Thread-5] ERROR 
> org.apache.hadoop.ozone.container.ozoneimpl.ContainerReader: Missing 
> .container file for ContainerID: 87951
> 2020-07-07 22:53:55,724 [Thread-5] ERROR 
> org.apache.hadoop.ozone.container.ozoneimpl.ContainerReader: Missing 
> .container file for ContainerID: 87865
> 2020-07-07 22:53:55,724 [Thread-5] ERROR 
> org.apache.hadoop.ozone.container.ozoneimpl.ContainerReader: Missing 
> .container file for ContainerID: 87752
> 2020-07-07 22:53:55,724 [Thread-5] ERROR 
> org.apache.hadoop.ozone.container.ozoneimpl.ContainerReader: Missing 
> .container file for ContainerID: 87578
> 2020-07-07 22:53:55,724 [Thread-5] ERROR 
> org.apache.hadoop.ozone.container.ozoneimpl.ContainerReader: Missing 
> .container file for ContainerID: 87935
> 2020-07-07 22:53:55,724 [Thread-5] ERROR 
> org.apache.hadoop.ozone.container.ozoneimpl.ContainerReader: Missing 
> .container file for ContainerID: 87870
> 2020-07-07 22:53:55,724 [Thread-13] ERROR 
> org.apache.hadoop.ozone.container.ozoneimpl.ContainerReader: Missing 
> .container file for ContainerID: 55222
> 2020-07-07 22:53:55,725 [Thread-5] ERROR 

[jira] [Created] (HDDS-4023) Delete closed container after all blocks have been deleted

2020-07-23 Thread Sammi Chen (Jira)
Sammi Chen created HDDS-4023:


 Summary: Delete closed container after all blocks have been deleted
 Key: HDDS-4023
 URL: https://issues.apache.org/jira/browse/HDDS-4023
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Sammi Chen


One of our use cases is that customers delete old objects and files regularly.
Once the old files are deleted, many containers are left with no user data.

The goal of this task is to delete all such containers, reducing the metadata
footprint of both the datanode and SCM.






[jira] [Updated] (HDDS-4011) Update S3 related documentation

2020-07-23 Thread Sammi Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sammi Chen updated HDDS-4011:
-
Target Version/s: 0.6.0

> Update S3 related documentation
> ---
>
> Key: HDDS-4011
> URL: https://issues.apache.org/jira/browse/HDDS-4011
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> HDDS-3993 creates the volume required for S3G during OM startup.
> So, remove the documented step saying that the s3v volume needs to be created.






[jira] [Resolved] (HDDS-3658) Stop to persist container related pipeline info of each key into OM DB to reduce DB size

2020-07-23 Thread Sammi Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sammi Chen resolved HDDS-3658.
--
Resolution: Fixed

> Stop to persist container related pipeline info of each key into OM DB to 
> reduce DB size
> 
>
> Key: HDDS-3658
> URL: https://issues.apache.org/jira/browse/HDDS-3658
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.7.0
>
>
> An investigation of serialized key sizes, for RATIS with three replicas.
> The following examples are quoted from the output of the "ozone sh key info" 
> command, which does not show the related pipeline information for each key 
> location element. 
> 1. Empty key, serialized size 113 bytes
> hadoop/bucket/user/root/terasort/10G-input-7/_SUCCESS
> {
>   "volumeName" : "hadoop",
>   "bucketName" : "bucket",
>   "name" : "user/root/terasort/10G-input-7/_SUCCESS",
>   "dataSize" : 0,
>   "creationTime" : "2019-11-21T13:53:11.330Z",
>   "modificationTime" : "2019-11-21T13:53:11.361Z",
>   "replicationType" : "RATIS",
>   "replicationFactor" : 3,
>   "ozoneKeyLocations" : [ ],
>   "metadata" : { },
>   "fileEncryptionInfo" : null
> }
> 2. Key with one chunk of data, serialized size 661 bytes
> hadoop/bucket/user/root/terasort/10G-input-6/part-m-00037
> {
>   "volumeName" : "hadoop",
>   "bucketName" : "bucket",
>   "name" : "user/root/terasort/10G-input-6/part-m-00037",
>   "dataSize" : 223696200,
>   "creationTime" : "2019-11-18T07:47:58.254Z",
>   "modificationTime" : "2019-11-18T07:53:52.066Z",
>   "replicationType" : "RATIS",
>   "replicationFactor" : 3,
>   "ozoneKeyLocations" : [ {
> "containerID" : 7,
> "localID" : 103157811003588713,
> "length" : 223696200,
> "offset" : 0
>   } ],
>   "metadata" : { },
>   "fileEncryptionInfo" : null
> }
> 3. Key with two chunks of data, serialized size 1205 bytes
> ozone sh key info hadoop/bucket/user/root/terasort/10G-input-7/part-m-00027
> {
>   "volumeName" : "hadoop",
>   "bucketName" : "bucket",
>   "name" : "user/root/terasort/10G-input-7/part-m-00027",
>   "dataSize" : 223696200,
>   "creationTime" : "2019-11-21T13:47:07.653Z",
>   "modificationTime" : "2019-11-21T13:53:07.964Z",
>   "replicationType" : "RATIS",
>   "replicationFactor" : 3,
>   "ozoneKeyLocations" : [ {
> "containerID" : 221,
> "localID" : 103176210196201501,
> "length" : 134217728,
> "offset" : 0
>   }, {
> "containerID" : 222,
> "localID" : 103176231767375926,
> "length" : 89478472,
> "offset" : 0
>   } ],
>   "metadata" : { },
>   "fileEncryptionInfo" : null
> }
> When a client reads a key, the "refreshPipeline" option controls whether to 
> fetch up-to-date container location info from SCM. 
> Currently, this option is always set to true, which makes the container 
> location info saved in the OM DB useless. 
> Another motivation: in an OM performance test using Nanda's tool, with 1000 
> million (1 billion) keys, each key with 1 replica and 2 chunks of metadata, 
> the total RocksDB directory size was 65.5 GB. One of our customer clusters 
> needs to store 10 billion objects. In that case, the DB size would be 
> approximately (65.5 GB * 10 / 2 * 3) ≈ 1 TB. 
> The goal of this task is to discard the container location info when 
> persisting keys to the OM DB, to save DB space.






[jira] [Updated] (HDDS-3658) Stop to persist container related pipeline info of each key into OM DB to reduce DB size

2020-07-23 Thread Sammi Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sammi Chen updated HDDS-3658:
-
Fix Version/s: 0.7.0

> Stop to persist container related pipeline info of each key into OM DB to 
> reduce DB size
> 
>
> Key: HDDS-3658
> URL: https://issues.apache.org/jira/browse/HDDS-3658
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.7.0
>
>






[GitHub] [hadoop-ozone] ChenSammi commented on pull request #1012: HDDS-3658. Stop to persist container related pipeline info of each ke…

2020-07-23 Thread GitBox


ChenSammi commented on pull request #1012:
URL: https://github.com/apache/hadoop-ozone/pull/1012#issuecomment-663318814


   Thanks @elek and @adoroszlai for reviewing the code. 






[GitHub] [hadoop-ozone] ChenSammi merged pull request #1012: HDDS-3658. Stop to persist container related pipeline info of each ke…

2020-07-23 Thread GitBox


ChenSammi merged pull request #1012:
URL: https://github.com/apache/hadoop-ozone/pull/1012


   






[GitHub] [hadoop-ozone] captainzmc closed pull request #1233: HDDS-3725. Ozone sh volume client support quota option.

2020-07-23 Thread GitBox


captainzmc closed pull request #1233:
URL: https://github.com/apache/hadoop-ozone/pull/1233


   






[jira] [Assigned] (HDDS-4010) S3G startup fails when multiple service ids are configured.

2020-07-23 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HDDS-4010:


Assignee: Bharat Viswanadham

> S3G startup fails when multiple service ids are configured.
> ---
>
> Key: HDDS-4010
> URL: https://issues.apache.org/jira/browse/HDDS-4010
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> This Jira is to fix this TODO.
> OzoneServiceProvider.java L59:
> {code:java}
>   // HA cluster.
>   // For now if multiple service id's are configured we throw exception.
>   // As if multiple service id's are configured, S3Gateway will not be
>   // knowing which one to talk to. In future, if OM federation is supported
>   // we can resolve this by having another property like
>   // ozone.om.internal.service.id.
>   // TODO: Revisit this later.
>   if (serviceIdList.size() > 1) {
>     throw new IllegalArgumentException("Multiple serviceIds are " +
>         "configured. " + Arrays.toString(serviceIdList.toArray()));
> {code}






[jira] [Updated] (HDDS-4010) S3G startup fails when multiple service ids are configured.

2020-07-23 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-4010:
-
Labels: pull-request-available  (was: newbie pull-request-available)

> S3G startup fails when multiple service ids are configured.
> ---
>
> Key: HDDS-4010
> URL: https://issues.apache.org/jira/browse/HDDS-4010
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>






[GitHub] [hadoop-ozone] bharatviswa504 opened a new pull request #1252: HDDS-4010. S3G startup fails when multiple service ids are configured.

2020-07-23 Thread GitBox


bharatviswa504 opened a new pull request #1252:
URL: https://github.com/apache/hadoop-ozone/pull/1252


   ## What changes were proposed in this pull request?
   
   This Jira is to fix this TODO.
   
   **OzoneServiceProvider.java L59:**
   
     // HA cluster.
     // For now if multiple service id's are configured we throw exception.
     // As if multiple service id's are configured, S3Gateway will not be
     // knowing which one to talk to. In future, if OM federation is supported
     // we can resolve this by having another property like
     // ozone.om.internal.service.id.
     // TODO: Revisit this later.
     if (serviceIdList.size() > 1) {
       throw new IllegalArgumentException("Multiple serviceIds are " +
           "configured. " + Arrays.toString(serviceIdList.toArray()));
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-4010
   
   ## How was this patch tested?
   
   HDDS-4008 added test for configuration, this Jira reuses the same.
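The direction hinted at in the quoted TODO (an `ozone.om.internal.service.id`-style property) could be sketched as below; the class and method names here are illustrative assumptions, not the actual patch:

```java
import java.util.List;

// Illustrative sketch only: with multiple OM service ids configured, fall
// back to an explicitly configured "internal" id instead of failing outright.
class ServiceIdResolver {
  static String resolve(List<String> serviceIds, String internalServiceId) {
    if (serviceIds.size() == 1) {
      return serviceIds.get(0);
    }
    // Multiple ids are only usable if the deployment says which one is ours.
    if (internalServiceId != null && serviceIds.contains(internalServiceId)) {
      return internalServiceId;
    }
    throw new IllegalArgumentException(
        "Multiple serviceIds are configured: " + serviceIds);
  }
}
```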
   






[jira] [Updated] (HDDS-4010) S3G startup fails when multiple service ids are configured.

2020-07-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4010:
-
Labels: newbie pull-request-available  (was: newbie)

> S3G startup fails when multiple service ids are configured.
> ---
>
> Key: HDDS-4010
> URL: https://issues.apache.org/jira/browse/HDDS-4010
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Priority: Major
>  Labels: newbie, pull-request-available
>






[GitHub] [hadoop-ozone] runzhiwang commented on pull request #1250: HDDS-4018. Datanode log spammed by NPE

2020-07-23 Thread GitBox


runzhiwang commented on pull request #1250:
URL: https://github.com/apache/hadoop-ozone/pull/1250#issuecomment-663285841


   LGTM +1






[jira] [Updated] (HDDS-4021) Organize Recon DBs into a 'Definition'.

2020-07-23 Thread Aravindan Vijayan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-4021:

Summary: Organize Recon DBs into a 'Definition'.  (was: Recon NodeDB should 
be part of the ReconDBDefinition)

> Organize Recon DBs into a 'Definition'.
> ---
>
> Key: HDDS-4021
> URL: https://issues.apache.org/jira/browse/HDDS-4021
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Recon
>Affects Versions: 0.5.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Aravindan Vijayan
>Priority: Major
>
> ReconNodeManager uses node db in an old format which is not part of 
> ReconDBDefinition. Move the definition to ReconDBDefinition.






[jira] [Updated] (HDDS-4021) Organize Recon DBs into a 'DBDefinition'.

2020-07-23 Thread Aravindan Vijayan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-4021:

Summary: Organize Recon DBs into a 'DBDefinition'.  (was: Organize Recon 
DBs into a 'Definition'.)

> Organize Recon DBs into a 'DBDefinition'.
> -
>
> Key: HDDS-4021
> URL: https://issues.apache.org/jira/browse/HDDS-4021
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Recon
>Affects Versions: 0.5.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Aravindan Vijayan
>Priority: Major
>
> ReconNodeManager uses node db in an old format which is not part of 
> ReconDBDefinition. Move the definition to ReconDBDefinition.






[jira] [Updated] (HDDS-4021) Organize Recon DBs into a 'DBDefinition'.

2020-07-23 Thread Aravindan Vijayan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-4021:

Description: 
* ReconNodeManager uses node db in an old format which is not part of 
ReconDBDefinition. Move the definition to ReconDBDefinition.
* Create DB Definition for Recon Container DB.
* Modify DBScanner tool to allow it to read Recon DBs. 

  was:ReconNodeManager uses node db in an old format which is not part of 
ReconDBDefinition. Move the definition to ReconDBDefinition.


> Organize Recon DBs into a 'DBDefinition'.
> -
>
> Key: HDDS-4021
> URL: https://issues.apache.org/jira/browse/HDDS-4021
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Recon
>Affects Versions: 0.5.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Aravindan Vijayan
>Priority: Major
>
> * ReconNodeManager uses node db in an old format which is not part of 
> ReconDBDefinition. Move the definition to ReconDBDefinition.
> * Create DB Definition for Recon Container DB.
> * Modify DBScanner tool to allow it to read Recon DBs. 






[GitHub] [hadoop-ozone] bharatviswa504 opened a new pull request #1251: HDDS-4022. Ozone s3 API return 400 Bad Request for head-bucket for non existing bucket.

2020-07-23 Thread GitBox


bharatviswa504 opened a new pull request #1251:
URL: https://github.com/apache/hadoop-ozone/pull/1251


   ## What changes were proposed in this pull request?
   
   Ozone s3 API returns 400 Bad Request for head-bucket for non-existing bucket.
   
   **hrt_qa$ aws s3api --ca-bundle=/usr/local/share/ca-certificates/ca.crt 
--endpoint https://s3g:9879/ head-bucket --bucket fsdghj
   
   An error occurred (400) when calling the HeadBucket operation: Bad Request**
   
   It should return 404 as per AWS documentation:
   https://docs.aws.amazon.com/cli/latest/reference/s3api/head-bucket.html
   
   A client error (404) occurred when calling the HeadBucket operation: Not 
Found
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-4022
   
   ## How was this patch tested?
   
   Added test.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4022) Ozone s3 API return 400 Bad Request for head-bucket for non existing bucket

2020-07-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4022:
-
Labels: pull-request-available  (was: )

> Ozone s3 API return 400 Bad Request for head-bucket for non existing bucket
> ---
>
> Key: HDDS-4022
> URL: https://issues.apache.org/jira/browse/HDDS-4022
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Namit Maheshwari
>Assignee: Bharat Viswanadham
>Priority: Blocker
>  Labels: pull-request-available
>
> Ozone s3 API returns 400 Bad Request for head-bucket for non-existing bucket.
> hrt_qa$ aws s3api  --ca-bundle=/usr/local/share/ca-certificates/ca.crt 
> --endpoint https://s3g:9879/  head-bucket --bucket fsdghj
> An error occurred (400) when calling the HeadBucket operation: Bad Request
> It should return 404 as per AWS documentation:
> https://docs.aws.amazon.com/cli/latest/reference/s3api/head-bucket.html
> A client error (404) occurred when calling the HeadBucket operation: Not 
> Found 






[jira] [Updated] (HDDS-4022) Ozone s3 API return 400 Bad Request for head-bucket for non existing bucket

2020-07-23 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-4022:
-
Labels:   (was: S3)

> Ozone s3 API return 400 Bad Request for head-bucket for non existing bucket
> ---
>
> Key: HDDS-4022
> URL: https://issues.apache.org/jira/browse/HDDS-4022
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> Ozone s3 API returns 400 Bad Request for head-bucket for non-existing bucket.
> hrt_qa$ aws s3api  --ca-bundle=/usr/local/share/ca-certificates/ca.crt 
> --endpoint https://s3g:9879/  head-bucket --bucket fsdghj
> An error occurred (400) when calling the HeadBucket operation: Bad Request
> It should return 404 as per AWS documentation:
> https://docs.aws.amazon.com/cli/latest/reference/s3api/head-bucket.html
> A client error (404) occurred when calling the HeadBucket operation: Not 
> Found 






[jira] [Created] (HDDS-4022) Ozone s3 API return 400 Bad Request for head-bucket for non existing bucket

2020-07-23 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-4022:


 Summary: Ozone s3 API return 400 Bad Request for head-bucket for 
non existing bucket
 Key: HDDS-4022
 URL: https://issues.apache.org/jira/browse/HDDS-4022
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


Ozone s3 API returns 400 Bad Request for head-bucket for non-existing bucket.

hrt_qa$ aws s3api  --ca-bundle=/usr/local/share/ca-certificates/ca.crt 
--endpoint https://s3g:9879/  head-bucket --bucket fsdghj

An error occurred (400) when calling the HeadBucket operation: Bad Request

It should return 404 as per AWS documentation:
https://docs.aws.amazon.com/cli/latest/reference/s3api/head-bucket.html

A client error (404) occurred when calling the HeadBucket operation: Not Found 
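The expected status mapping can be sketched as a minimal, hypothetical check (class and method names below are illustrative, not Ozone's actual S3 gateway code):

```java
import java.util.Set;

// Hypothetical sketch of the HeadBucket status mapping described above:
// a missing bucket should map to HTTP 404 (Not Found), not 400 (Bad Request).
public class HeadBucketStatusSketch {
    static int headBucketStatus(Set<String> existingBuckets, String bucket) {
        // Bucket exists -> 200 OK; unknown bucket -> 404, per the AWS contract.
        return existingBuckets.contains(bucket) ? 200 : 404;
    }

    public static void main(String[] args) {
        Set<String> buckets = Set.of("b1");
        System.out.println(headBucketStatus(buckets, "b1"));     // 200
        System.out.println(headBucketStatus(buckets, "fsdghj")); // 404
    }
}
```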






[jira] [Updated] (HDDS-4022) Ozone s3 API return 400 Bad Request for head-bucket for non existing bucket

2020-07-23 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-4022:
-
Reporter: Namit Maheshwari  (was: Bharat Viswanadham)

> Ozone s3 API return 400 Bad Request for head-bucket for non existing bucket
> ---
>
> Key: HDDS-4022
> URL: https://issues.apache.org/jira/browse/HDDS-4022
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Namit Maheshwari
>Assignee: Bharat Viswanadham
>Priority: Blocker
>
> Ozone s3 API returns 400 Bad Request for head-bucket for non-existing bucket.
> hrt_qa$ aws s3api  --ca-bundle=/usr/local/share/ca-certificates/ca.crt 
> --endpoint https://s3g:9879/  head-bucket --bucket fsdghj
> An error occurred (400) when calling the HeadBucket operation: Bad Request
> It should return 404 as per AWS documentation:
> https://docs.aws.amazon.com/cli/latest/reference/s3api/head-bucket.html
> A client error (404) occurred when calling the HeadBucket operation: Not 
> Found 






[jira] [Updated] (HDDS-4022) Ozone s3 API return 400 Bad Request for head-bucket for non existing bucket

2020-07-23 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-4022:
-
Component/s: S3

> Ozone s3 API return 400 Bad Request for head-bucket for non existing bucket
> ---
>
> Key: HDDS-4022
> URL: https://issues.apache.org/jira/browse/HDDS-4022
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: S3
>
> Ozone s3 API returns 400 Bad Request for head-bucket for non-existing bucket.
> hrt_qa$ aws s3api  --ca-bundle=/usr/local/share/ca-certificates/ca.crt 
> --endpoint https://s3g:9879/  head-bucket --bucket fsdghj
> An error occurred (400) when calling the HeadBucket operation: Bad Request
> It should return 404 as per AWS documentation:
> https://docs.aws.amazon.com/cli/latest/reference/s3api/head-bucket.html
> A client error (404) occurred when calling the HeadBucket operation: Not 
> Found 






[jira] [Updated] (HDDS-4022) Ozone s3 API return 400 Bad Request for head-bucket for non existing bucket

2020-07-23 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-4022:
-
Target Version/s: 0.6.0
Priority: Blocker  (was: Major)

> Ozone s3 API return 400 Bad Request for head-bucket for non existing bucket
> ---
>
> Key: HDDS-4022
> URL: https://issues.apache.org/jira/browse/HDDS-4022
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
>
> Ozone s3 API returns 400 Bad Request for head-bucket for non-existing bucket.
> hrt_qa$ aws s3api  --ca-bundle=/usr/local/share/ca-certificates/ca.crt 
> --endpoint https://s3g:9879/  head-bucket --bucket fsdghj
> An error occurred (400) when calling the HeadBucket operation: Bad Request
> It should return 404 as per AWS documentation:
> https://docs.aws.amazon.com/cli/latest/reference/s3api/head-bucket.html
> A client error (404) occurred when calling the HeadBucket operation: Not 
> Found 






[jira] [Updated] (HDDS-4022) Ozone s3 API return 400 Bad Request for head-bucket for non existing bucket

2020-07-23 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-4022:
-
Labels: S3  (was: )

> Ozone s3 API return 400 Bad Request for head-bucket for non existing bucket
> ---
>
> Key: HDDS-4022
> URL: https://issues.apache.org/jira/browse/HDDS-4022
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: S3
>
> Ozone s3 API returns 400 Bad Request for head-bucket for non-existing bucket.
> hrt_qa$ aws s3api  --ca-bundle=/usr/local/share/ca-certificates/ca.crt 
> --endpoint https://s3g:9879/  head-bucket --bucket fsdghj
> An error occurred (400) when calling the HeadBucket operation: Bad Request
> It should return 404 as per AWS documentation:
> https://docs.aws.amazon.com/cli/latest/reference/s3api/head-bucket.html
> A client error (404) occurred when calling the HeadBucket operation: Not 
> Found 






[jira] [Created] (HDDS-4021) Recon NodeDB should be part of the ReconDBDefinition

2020-07-23 Thread Vivek Ratnavel Subramanian (Jira)
Vivek Ratnavel Subramanian created HDDS-4021:


 Summary: Recon NodeDB should be part of the ReconDBDefinition
 Key: HDDS-4021
 URL: https://issues.apache.org/jira/browse/HDDS-4021
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Recon
Affects Versions: 0.5.0
Reporter: Vivek Ratnavel Subramanian
Assignee: Aravindan Vijayan


ReconNodeManager uses node db in an old format which is not part of 
ReconDBDefinition. Move the definition to ReconDBDefinition.






[jira] [Resolved] (HDDS-4008) Recon should fallback to ozone.om.service.ids when the internal service id is not defined.

2020-07-23 Thread Aravindan Vijayan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan resolved HDDS-4008.
-
Resolution: Fixed

Merged the PR.

> Recon should fallback to ozone.om.service.ids when the internal service id is 
> not defined.
> --
>
> Key: HDDS-4008
> URL: https://issues.apache.org/jira/browse/HDDS-4008
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Blocker
>  Labels: pull-request-available
>
> Recon connects to OM via RPC using the "ozone.om.internal.service.id" to get 
> updates. If the above config is not defined, but the ozone.om.service.ids is 
> defined, Recon should use the latter as a fallback. Currently, a single Recon 
> instance supports only 1 OM HA cluster at a time. Hence, if multiple ids are 
> defined, Recon will pick the first.
> Thanks to [~vivekratnavel] for reporting the issue.






[GitHub] [hadoop-ozone] avijayanhwx merged pull request #1243: HDDS-4008. Recon should fallback to ozone.om.service.ids when the internal service id is not defined.

2020-07-23 Thread GitBox


avijayanhwx merged pull request #1243:
URL: https://github.com/apache/hadoop-ozone/pull/1243


   






[GitHub] [hadoop-ozone] avijayanhwx commented on pull request #1243: HDDS-4008. Recon should fallback to ozone.om.service.ids when the internal service id is not defined.

2020-07-23 Thread GitBox


avijayanhwx commented on pull request #1243:
URL: https://github.com/apache/hadoop-ozone/pull/1243#issuecomment-663225883


   Thank you for the reviews @bharatviswa504 & @vivekratnavel. 






[jira] [Updated] (HDDS-4009) Recon Overview page: The volume, bucket and key counts are not accurate

2020-07-23 Thread Aravindan Vijayan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-4009:

Target Version/s: 0.7.0  (was: 0.6.0)

> Recon Overview page: The volume, bucket and key counts are not accurate
> ---
>
> Key: HDDS-4009
> URL: https://issues.apache.org/jira/browse/HDDS-4009
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Recon
>Affects Versions: 0.5.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> The counts shown in the overview page are not accurate due to the usage of 
> "rocksdb.estimate-num-keys" to get the counts. Instead, keep track of 
> accurate counts by updating the counter in a global table every time an event 
> is triggered via FileSizeCount Task in Recon.  
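The event-driven counting approach proposed above can be sketched roughly as follows (a minimal, hypothetical illustration; names do not reflect the actual Recon task code):

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of exact count tracking: instead of relying on the
// approximate rocksdb.estimate-num-keys property, bump a counter on every
// PUT/DELETE event so the count stays accurate.
public class CountTrackerSketch {
    private final AtomicLong keyCount = new AtomicLong();

    void onEvent(String type) {
        if ("PUT".equals(type)) {
            keyCount.incrementAndGet();   // a new key was added
        } else if ("DELETE".equals(type)) {
            keyCount.decrementAndGet();   // an existing key was removed
        }
    }

    long getKeyCount() {
        return keyCount.get();
    }

    public static void main(String[] args) {
        CountTrackerSketch tracker = new CountTrackerSketch();
        tracker.onEvent("PUT");
        tracker.onEvent("PUT");
        tracker.onEvent("DELETE");
        System.out.println(tracker.getKeyCount()); // 1
    }
}
```

In the real task the counter would be persisted in a global table rather than held in memory, so it survives Recon restarts.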






[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #1243: HDDS-4008. Recon should fallback to ozone.om.service.ids when the internal service id is not defined.

2020-07-23 Thread GitBox


bharatviswa504 commented on a change in pull request #1243:
URL: https://github.com/apache/hadoop-ozone/pull/1243#discussion_r459660177



##
File path: 
hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/src/views/overview/overview.tsx
##
@@ -180,7 +180,7 @@ export class Overview extends 
React.Component, IOverviewS
 
   
   
-
+

Review comment:
   That is fine, but a new Jira would be nicer here.
   I am fine with it.








[GitHub] [hadoop-ozone] bharatviswa504 edited a comment on pull request #1244: HDDS-3999. OM Shutdown when Commit part tries to commit the part, after abort upload.

2020-07-23 Thread GitBox


bharatviswa504 edited a comment on pull request #1244:
URL: https://github.com/apache/hadoop-ozone/pull/1244#issuecomment-663149010


   >As the parameter omMultipartKeyInfo of S3MultipartUploadCommitPartResponse 
is annotation to Nullable, so, we should >consider possible NPE, for this, i 
have two suggestion
   >Mark omMultipartKeyInfo Nonnull, and keep the callee never give a null to 
it.
   
   When the status is NO_SUCH_MULTIPART_UPLOAD_ERROR, omMultipartKeyInfo will 
be null, so in this case we should not use omMultipartKeyInfo. The previous 
code had a bug that caused this issue.
   
   >Also give a null pointer check for omMultipartKeyInfo int 
S3MultipartUploadCommitPartResponse#addToDBBatch, to avoid >NPE.
   
   And when the status is OK, omMultipartKeyInfo will not be null, so we don't 
need an additional null check here. We have already checked the status, and we 
access omMultipartKeyInfo only when the status is OK. I don't see any 
possibility of omMultipartKeyInfo being null when the status is OK. Let me know 
if you still see one; we can add a null check, but I am trying to understand 
whether there is a missed case, since the status OK check already guards it.
   
   Just to explain why there is a check for oldPartKeyInfo: even when the 
status is OK, it can be null when this part is not an override.
   
   Tagged omMultipartKeyInfo as nullable because when the error is 
NO_SUCH_MULTIPART_UPLOAD_ERROR, the value will be null.
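   The guarded-access pattern described in this thread can be sketched roughly as follows (class, enum, and method names are hypothetical, not the actual Ozone response classes):

```java
// Hypothetical sketch: dereference the nullable multipart key info only on
// the OK path; on NO_SUCH_MULTIPART_UPLOAD_ERROR it may be null and must
// not be touched.
public class CommitPartResponseSketch {
    enum Status { OK, NO_SUCH_MULTIPART_UPLOAD_ERROR }

    static String describe(Status status, Object multipartKeyInfo) {
        if (status == Status.OK) {
            // By contract, multipartKeyInfo is non-null when status is OK.
            return "commit: " + multipartKeyInfo;
        }
        // Error path: multipartKeyInfo may be null; skip without dereferencing.
        return "skip: " + status;
    }

    public static void main(String[] args) {
        System.out.println(describe(Status.OK, "partKeyInfo"));
        System.out.println(describe(Status.NO_SUCH_MULTIPART_UPLOAD_ERROR, null));
    }
}
```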






[GitHub] [hadoop-ozone] bharatviswa504 edited a comment on pull request #1244: HDDS-3999. OM Shutdown when Commit part tries to commit the part, after abort upload.

2020-07-23 Thread GitBox


bharatviswa504 edited a comment on pull request #1244:
URL: https://github.com/apache/hadoop-ozone/pull/1244#issuecomment-663149010


   >As the parameter omMultipartKeyInfo of S3MultipartUploadCommitPartResponse 
is annotation to Nullable, so, we should >consider possible NPE, for this, i 
have two suggestion
   >Mark omMultipartKeyInfo Nonnull, and keep the callee never give a null to 
it.
   
   When NO_SUCH_MULTIPART_UPLOAD_ERROR omMultipartKeyInfo will be null. So, in 
this case we should not use omMultipartKeyInfo. The previous code has bug that 
caused this issue.
   
   >Also give a null pointer check for omMultipartKeyInfo int 
S3MultipartUploadCommitPartResponse#addToDBBatch, to avoid >NPE.
   And when Status is OK, omMultipartKeyInfo will not be null, so we don't need 
an additional null check here. We have already checked the Status, and we 
access omMultipartKeyInfo only when the status is OK. I don't see any 
possibility when the status is OK, omMultipartKeyInfo to be null. Let me know 
if you still see any possibility. We can add a null check, but trying to 
understand here if there is any missed case, as already status OK check guarded 
it.
   
   Just want to explain why there is a check for oldPartKeyInfo, when the 
status is OK, there is a chance of null when this part is not an override.
   
   Tagged omMultipartKeyInfo as nullable because when error is 
NO_SUCH_MULTIPART_UPLOAD_ERROR, the value will be null.






[GitHub] [hadoop-ozone] bharatviswa504 edited a comment on pull request #1244: HDDS-3999. OM Shutdown when Commit part tries to commit the part, after abort upload.

2020-07-23 Thread GitBox


bharatviswa504 edited a comment on pull request #1244:
URL: https://github.com/apache/hadoop-ozone/pull/1244#issuecomment-663150794


   >BTW, i don't think i have a abortMPU operation by executed the following 
command
   
   >$ zcat logs/om-audit-host-*log.gz | grep "COMPLETE_MULTIPART_UPLOAD" | grep 
-v "multipartList"
   >$ zcat logs/om-audit-host-*log.gz | grep "COMPLETE_MULTIPART_UPLOAD" | wc 
-l 
   >188
   
   This might be one of the scenarios that triggers this error.
   On thinking more, this can also happen when the commit part tries to commit 
a part after a complete multipart upload.






[GitHub] [hadoop-ozone] bharatviswa504 commented on pull request #1244: HDDS-3999. OM Shutdown when Commit part tries to commit the part, after abort upload.

2020-07-23 Thread GitBox


bharatviswa504 commented on pull request #1244:
URL: https://github.com/apache/hadoop-ozone/pull/1244#issuecomment-663150794


   >BTW, i don't think i have a abortMPU operation by executed the following 
command
   
   >$ zcat logs/om-audit-host-*log.gz | grep "COMPLETE_MULTIPART_UPLOAD" | grep 
-v "multipartList"
   >$ zcat logs/om-audit-host-*log.gz | grep "COMPLETE_MULTIPART_UPLOAD" | wc 
-l 
   >188
   
   This might be one of the scenarios that triggers this error.
   On thinking more, this can happen when the commit part tries to commit a 
part after a complete multipart upload.






[GitHub] [hadoop-ozone] bharatviswa504 commented on pull request #1244: HDDS-3999. OM Shutdown when Commit part tries to commit the part, after abort upload.

2020-07-23 Thread GitBox


bharatviswa504 commented on pull request #1244:
URL: https://github.com/apache/hadoop-ozone/pull/1244#issuecomment-663149010


   >As the parameter omMultipartKeyInfo of S3MultipartUploadCommitPartResponse 
is annotation to Nullable, so, we should >consider possible NPE, for this, i 
have two suggestion
   >Mark omMultipartKeyInfo Nonnull, and keep the callee never give a null to 
it.
   
   When NO_SUCH_MULTIPART_UPLOAD_ERROR omMultipartKeyInfo will be null. So, in 
this case we should not use omMultipartKeyInfo. The previous code has bug that 
caused this issue.
   
   >Also give a null pointer check for omMultipartKeyInfo int 
S3MultipartUploadCommitPartResponse#addToDBBatch, to avoid >NPE.
   And when Status is OK, omMultipartKeyInfo will not be null, so we don't need 
an additional null check here. We have already checked the Status, and we 
access omMultipartKeyInfo only when the status is OK. I don't see any 
possibility when the status is OK, omMultipartKeyInfo to be null. Let me know 
if you still see any possibility. We can add a null check, but trying to 
understand here if there is any missed case.
   
   Tagged omMultipartKeyInfo as nullable because when error is 
NO_SUCH_MULTIPART_UPLOAD_ERROR, the value will be null.






[GitHub] [hadoop-ozone] bharatviswa504 edited a comment on pull request #1244: HDDS-3999. OM Shutdown when Commit part tries to commit the part, after abort upload.

2020-07-23 Thread GitBox


bharatviswa504 edited a comment on pull request #1244:
URL: https://github.com/apache/hadoop-ozone/pull/1244#issuecomment-663149010


   >As the parameter omMultipartKeyInfo of S3MultipartUploadCommitPartResponse 
is annotation to Nullable, so, we should >consider possible NPE, for this, i 
have two suggestion
   >Mark omMultipartKeyInfo Nonnull, and keep the callee never give a null to 
it.
   
   When NO_SUCH_MULTIPART_UPLOAD_ERROR omMultipartKeyInfo will be null. So, in 
this case we should not use omMultipartKeyInfo. The previous code has bug that 
caused this issue.
   
   >Also give a null pointer check for omMultipartKeyInfo int 
S3MultipartUploadCommitPartResponse#addToDBBatch, to avoid >NPE.
   And when Status is OK, omMultipartKeyInfo will not be null, so we don't need 
an additional null check here. We have already checked the Status, and we 
access omMultipartKeyInfo only when the status is OK. I don't see any 
possibility when the status is OK, omMultipartKeyInfo to be null. Let me know 
if you still see any possibility. We can add a null check, but trying to 
understand here if there is any missed case, as already status OK check guarded 
it.
   
   Tagged omMultipartKeyInfo as nullable because when error is 
NO_SUCH_MULTIPART_UPLOAD_ERROR, the value will be null.






[GitHub] [hadoop-ozone] avijayanhwx commented on a change in pull request #1243: HDDS-4008. Recon should fallback to ozone.om.service.ids when the internal service id is not defined.

2020-07-23 Thread GitBox


avijayanhwx commented on a change in pull request #1243:
URL: https://github.com/apache/hadoop-ozone/pull/1243#discussion_r459622263



##
File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OmUtils.java
##
@@ -527,4 +529,45 @@ public static void validateKeyName(String keyName)
   OMException.ResultCodes.INVALID_KEY_NAME);
 }
   }
+
+  /**
+   * Return configured OzoneManager service id based on the following logic.
+   * Look at 'ozone.om.internal.service.id' first. If configured, return that.
+   * If the above is not configured, look at 'ozone.om.service.ids'.
+   * If count(ozone.om.service.ids) == 1, return that id.
+   * If count(ozone.om.service.ids) > 1 throw exception
+   * If 'ozone.om.service.ids' is not configured, return null. (Non HA)
+   * @param conf configuration
+   * @return OM service ID.
+   * @throws IOException on error.
+   */
+  public static String getOzoneManagerServiceId(OzoneConfiguration conf)
+  throws IOException {
+Collection<String> omServiceIds;
+String localOMServiceId = conf.get(OZONE_OM_INTERNAL_SERVICE_ID);
+if (localOMServiceId == null) {
+  LOG.info("{} is not defined, falling back to {} to find serviceID for "
+  + "OzoneManager if it is HA enabled cluster",
+  OZONE_OM_INTERNAL_SERVICE_ID, OZONE_OM_SERVICE_IDS_KEY);
+  omServiceIds = conf.getTrimmedStringCollection(
+  OZONE_OM_SERVICE_IDS_KEY);
+  if (omServiceIds.size() > 1) {
+throw new IOException(String.format(
+"More than 1 OzoneManager ServiceID (ozone.om.service.ids) " +
+"configured : %s, but ozone.om.internal.service.id is not " +
+"configured.", omServiceIds.toString()));
+  }
+} else {
+  omServiceIds = Collections.singletonList(localOMServiceId);
+}
+

Review comment:
   Thanks @bharatviswa504. Will fix this.








[jira] [Commented] (HDDS-4020) ACL commands like getacl and setacl should return a response only when Native Authorizer is enabled

2020-07-23 Thread Istvan Fajth (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17163785#comment-17163785
 ] 

Istvan Fajth commented on HDDS-4020:


I would like to suggest a few things for consideration on this.

If we have an external authorizer, like Ranger, then we should fail any ACL 
creation or modification command with a proper error message saying that ACL 
modifications should happen via the external authorizer in use.
On the other hand, read operations should not fail.
Currently we get this error message on a getacl when an external authorizer is 
enabled:
{{[# ozone sh volume getacl o3://ozone1/test}}
{{PERMISSION_DENIED User u...@example.com doesn't have READ_ACL permission to 
access volume}}

I think reading the ACLs from the external authorizer and showing them to the 
users would be a much nicer approach, though I agree this should probably go 
into a separate JIRA, as this might need modifications in the 
IAccessAuthorizer that have to be followed up by the Ranger plugin itself as 
well.

> ACL commands like getacl and setacl should return a response only when Native 
> Authorizer is enabled
> ---
>
> Key: HDDS-4020
> URL: https://issues.apache.org/jira/browse/HDDS-4020
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone CLI, Ozone Manager
>Affects Versions: 0.5.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Bharat Viswanadham
>Priority: Major
>
> Currently, the getacl and setacl commands return wrong information when an 
> external authorizer such as Ranger is enabled. There should be a check to 
> verify if Native Authorizer is enabled before returning any response for 
> these two commands.
> If an external authorizer is enabled, it should show a nice message about 
> managing acls in external authorizer.  






[jira] [Commented] (HDDS-3994) Write object when met exception can be slower than before

2020-07-23 Thread maobaolong (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17163765#comment-17163765
 ] 

maobaolong commented on HDDS-3994:
--

[~ljain] Thanks for your clarification. After a quick look at the new 
retry policy, I believe it could be a better retry policy; as you said, we need 
some more time to tune the default values, but right now it causes a performance 
regression after we upgraded our s3g.
We spent a whole day finding out that the original retry policy had been replaced 
by the newer one, so our configuration keys related to RetryLimitedPolicy no 
longer took effect. Although the newer retry policy may be better, we should 
make the retry policy configurable for users and default to the original one. 
Meanwhile, we can announce the new, improved retry policy so that users like us 
can do some testing and tuning; after that, we can switch to the newer policy by 
modifying ozone-site.xml. We could even change the retry policy on some nodes 
first and monitor performance in the production environment for some days before 
changing all the other nodes.

So I think a configurable framework for the retry policy is necessary, and 
defaulting to the original RetryLimitedPolicy is also important. Adopting the 
new policy should happen only after studying, testing, and tuning it; that is a 
process, and we will continue tuning the new policy later. For now we need to 
restore the original retry logic and fix the low performance caused by our lack 
of familiarity with the new retry policy.

Please take a look at my PR, thanks; related: 
https://github.com/apache/hadoop-ozone/pull/1231
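The configurable framework proposed above could look like the following minimal sketch: a factory keyed by a configuration property that defaults to the original fixed-sleep behaviour, so existing deployments keep their tuned settings. All names here (`RetrySketch`, the `retry.*` property keys) are hypothetical illustrations, not the actual Ozone or Ratis API.

```java
import java.util.Locale;
import java.util.Properties;

// Hypothetical sketch of a user-configurable retry policy framework.
// Neither the class nor the property keys are the real Ozone/Ratis names.
class RetrySketch {

  interface Policy {
    // Decide whether attempt number `attempt` (0-based) should be retried.
    boolean shouldRetry(int attempt);
  }

  // Original behaviour: retry up to maxRetries times with a fixed sleep.
  static Policy fixedSleep(int maxRetries) {
    return attempt -> attempt < maxRetries;
  }

  // Newer behaviour, simplified: retry while the exponentially growing
  // backoff stays at or below a configured cap.
  static Policy exponentialBackoff(long baseSleepMs, long maxSleepMs) {
    return attempt -> (baseSleepMs << attempt) <= maxSleepMs;
  }

  // Factory keyed by configuration, defaulting to the original policy.
  static Policy fromConf(Properties conf) {
    String name = conf.getProperty("retry.policy", "fixed")
        .toLowerCase(Locale.ROOT);
    switch (name) {
      case "exponential":
        return exponentialBackoff(
            Long.parseLong(conf.getProperty("retry.base.sleep.ms", "100")),
            Long.parseLong(conf.getProperty("retry.max.sleep.ms", "10000")));
      case "fixed":
      default:
        return fixedSleep(
            Integer.parseInt(conf.getProperty("retry.max.count", "180")));
    }
  }
}
```

With this shape, switching policies on a few canary nodes is a one-line ozone-site.xml change, which matches the gradual rollout described above.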

> Write object when met exception can be slower than before
> -
>
> Key: HDDS-3994
> URL: https://issues.apache.org/jira/browse/HDDS-3994
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.6.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
>  Labels: pull-request-available
>
> After HDDS-3350 , the retry policy changed, and the client write performance 
> getting lower than before.
>  
> With HDDS-3350, I restore the method RatisHelper#createRetryPolicy to the 
> previous commit, it works well.
>  
> The previous is 
>  
> {code:java}
> static RetryPolicy createRetryPolicy(ConfigurationSource conf) {
> int maxRetryCount =
> conf.getInt(OzoneConfigKeys.DFS_RATIS_CLIENT_REQUEST_MAX_RETRIES_KEY,
> OzoneConfigKeys.
> DFS_RATIS_CLIENT_REQUEST_MAX_RETRIES_DEFAULT);
> long retryInterval = conf.getTimeDuration(OzoneConfigKeys.
> DFS_RATIS_CLIENT_REQUEST_RETRY_INTERVAL_KEY, OzoneConfigKeys.
> DFS_RATIS_CLIENT_REQUEST_RETRY_INTERVAL_DEFAULT
> .toIntExact(TimeUnit.MILLISECONDS), TimeUnit.MILLISECONDS);
> TimeDuration sleepDuration =
> TimeDuration.valueOf(retryInterval, TimeUnit.MILLISECONDS);
> RetryPolicy retryPolicy = RetryPolicies
> .retryUpToMaximumCountWithFixedSleep(maxRetryCount, sleepDuration);
> return retryPolicy;
>   }
> {code}
> When I switch logLevel to TRACE level, i see the following log While using 
> HDDS-3350
> 2020-07-21 12:56:11,822 [grpc-default-executor-5] TRACE impl.OrderedAsync: 
> client-6F623ADF656D: Failed* 
> RaftClientRequest:client-6F623ADF656D->207b98d9-ad64-45a8-940f-504b514feff5@group-83A28012848F,
>  cid=2876, seq=1*, Watch(0), null
> java.util.concurrent.CompletionException: 
> org.apache.ratis.protocol.LeaderNotReadyException: 
> 207b98d9-ad64-45a8-940f-504b514feff5@group-83A28012848F is in LEADER state 
> but not ready yet.
> at 
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
> at 
> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
> at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:593)
> at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
> at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
> at 
> java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
> at 
> org.apache.ratis.grpc.client.GrpcClientProtocolClient$AsyncStreamObservers.completeReplyExceptionally(GrpcClientProtocolClient.java:358)
> at 
> org.apache.ratis.grpc.client.GrpcClientProtocolClient$AsyncStreamObservers.access$000(GrpcClientProtocolClient.java:264)
> at 
> org.apache.ratis.grpc.client.GrpcClientProtocolClient$AsyncStreamObservers$1.onNext(GrpcClientProtocolClient.java:283)
> at 
> 

[GitHub] [hadoop-ozone] avijayanhwx commented on a change in pull request #1243: HDDS-4008. Recon should fallback to ozone.om.service.ids when the internal service id is not defined.

2020-07-23 Thread GitBox


avijayanhwx commented on a change in pull request #1243:
URL: https://github.com/apache/hadoop-ozone/pull/1243#discussion_r459591303



##
File path: 
hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/src/views/overview/overview.tsx
##
@@ -180,7 +180,7 @@ export class Overview extends 
React.Component, IOverviewS
 
   
   
-
+

Review comment:
   I made this trivial change to cover for HDDS-4009 until that is fixed. 





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-4006) Disallow MPU on encrypted buckets.

2020-07-23 Thread Arpit Agarwal (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HDDS-4006.
-
Fix Version/s: 0.6.0
   Resolution: Fixed

> Disallow MPU on encrypted buckets.
> --
>
> Key: HDDS-4006
> URL: https://issues.apache.org/jira/browse/HDDS-4006
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.6.0
>
>
> With HDDS-3612 buckets created via ozone are also accessible via S3.
> This has caused a problem when the bucket is encrypted, the keys are not 
> encrypted on disk.
> *2 Issues:*
> 1. On OM, for each part a new encryption info is generated. During complete 
> Multipart upload, the encryption info is not stored in KeyInfo.
> 2. On the client, for part upload, the encryption info is silently ignored.
> If we don't throw an error, on an encrypted bucket, key data is not encrypted 
> on disks.
> For 0.6.0 release, we can mark this as not supported, and this will be fixed 
> in next release by HDDS-4005



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] arp7 merged pull request #1241: HDDS-4006. Disallow MPU on encrypted buckets.

2020-07-23 Thread GitBox


arp7 merged pull request #1241:
URL: https://github.com/apache/hadoop-ozone/pull/1241


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-4020) ACL commands like getacl and setacl should return a response only when Native Authorizer is enabled

2020-07-23 Thread Vivek Ratnavel Subramanian (Jira)
Vivek Ratnavel Subramanian created HDDS-4020:


 Summary: ACL commands like getacl and setacl should return a 
response only when Native Authorizer is enabled
 Key: HDDS-4020
 URL: https://issues.apache.org/jira/browse/HDDS-4020
 Project: Hadoop Distributed Data Store
  Issue Type: Task
  Components: Ozone CLI, Ozone Manager
Affects Versions: 0.5.0
Reporter: Vivek Ratnavel Subramanian
Assignee: Bharat Viswanadham


Currently, the getacl and setacl commands return wrong information when an 
external authorizer such as Ranger is enabled. There should be a check to 
verify if Native Authorizer is enabled before returning any response for these 
two commands.

If an external authorizer is enabled, it should show a nice message about 
managing acls in external authorizer.  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] avijayanhwx commented on a change in pull request #1096: HDDS-3833. Use Pipeline choose policy to choose pipeline from exist pipeline list

2020-07-23 Thread GitBox


avijayanhwx commented on a change in pull request #1096:
URL: https://github.com/apache/hadoop-ozone/pull/1096#discussion_r459577998



##
File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/PipelineChoosePolicy.java
##
@@ -0,0 +1,36 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+
+package org.apache.hadoop.hdds.scm;
+
+import org.apache.hadoop.hdds.scm.pipeline.Pipeline;
+
+import java.util.List;
+
+/**
+ * A {@link PipelineChoosePolicy} support choosing pipeline from exist list.
+ */
+public interface PipelineChoosePolicy {
+
+  /**
+   * Given an initial list of pipelines, return one of the pipelines.
+   *
+   * @param pipelineList list of pipelines.
+   * @return one of the pipelines.
+   */
+  Pipeline choosePipeline(List<Pipeline> pipelineList);

Review comment:
   @maobaolong For my knowledge, can you list a few pipeline selection 
policies that you envision? I am wondering if we are OK with just a 
List<Pipeline> as an argument.
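For illustration, one pluggable policy beyond the current random choice might prefer the least-loaded pipeline. The sketch below is hypothetical: `PipelineStub` and its `containerCount` field stand in for the real `Pipeline` class and whatever load metric SCM exposes.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Random;

// Hypothetical sketch of pluggable pipeline selection policies.
class PipelineChoiceSketch {

  // Stand-in for the real Pipeline; containerCount is a proxy for load.
  static final class PipelineStub {
    final String id;
    final int containerCount;
    PipelineStub(String id, int containerCount) {
      this.id = id;
      this.containerCount = containerCount;
    }
  }

  interface ChoosePolicy {
    PipelineStub choose(List<PipelineStub> pipelines);
  }

  // Current behaviour: uniform random choice over the available pipelines.
  static ChoosePolicy random(Random rnd) {
    return list -> list.get(rnd.nextInt(list.size()));
  }

  // One possible alternative: pick the pipeline with the fewest containers.
  static ChoosePolicy leastLoaded() {
    return list -> list.stream()
        .min(Comparator.comparingInt(p -> p.containerCount))
        .orElseThrow(IllegalArgumentException::new);
  }
}
```

If richer inputs (topology, client address) are needed later, the `List` argument could be wrapped in a context object so the interface does not have to change, which is the concern raised in the review below.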

##
File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/choose/algorithms/PipelineChoosePolicyFactory.java
##
@@ -0,0 +1,71 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.pipeline.choose.algorithms;
+
+import org.apache.hadoop.hdds.conf.ConfigurationSource;
+import org.apache.hadoop.hdds.scm.PipelineChoosePolicy;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import org.apache.hadoop.hdds.scm.exceptions.SCMException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.lang.reflect.Constructor;
+
+/**
+ * A factory to create pipeline choose policy instance based on configuration
+ * property {@link ScmConfigKeys#OZONE_SCM_PIPELINE_CHOOSE_IMPL_KEY}.
+ */
+public final class PipelineChoosePolicyFactory {
+  private static final Logger LOG =
+  LoggerFactory.getLogger(PipelineChoosePolicyFactory.class);
+
+  private static final Class
+  OZONE_SCM_PIPELINE_CHOOSE_IMPL_DEFAULT =
+  RandomPipelineChoosePolicy.class;
+
+  private PipelineChoosePolicyFactory() {
+  }
+
+  public static PipelineChoosePolicy getPolicy(
+  ConfigurationSource conf) throws SCMException {
+final Class policyClass = conf
+.getClass(ScmConfigKeys.OZONE_SCM_PIPELINE_CHOOSE_IMPL_KEY,
+OZONE_SCM_PIPELINE_CHOOSE_IMPL_DEFAULT,
+PipelineChoosePolicy.class);
+Constructor constructor;
+try {
+  constructor = policyClass.getDeclaredConstructor();

Review comment:
   We are relying on the default constructor here, and we pass in only 
List<Pipeline> through the API. We may have policies that need more 
information, like the topology, client address, etc. Can we make sure we 
support them without changing the interface later? 

##
File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
##
@@ -287,6 +287,9 @@
   public static final String OZONE_SCM_PIPELINE_OWNER_CONTAINER_COUNT =
   "ozone.scm.pipeline.owner.container.count";
   public static final int OZONE_SCM_PIPELINE_OWNER_CONTAINER_COUNT_DEFAULT = 3;
+  // Pipeline choose policy:
+  public static final String OZONE_SCM_PIPELINE_CHOOSE_IMPL_KEY =
+  "ozone.scm.pipeline.choose.impl";

Review comment:
   nit. Suggest rename ozone.scm.pipeline.choose.impl --> 
ozone.scm.pipeline.choose.policy.impl

##
File path: 

[jira] [Updated] (HDDS-4017) Acceptance check may run against wrong commit

2020-07-23 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-4017:
---
Status: Patch Available  (was: Open)

> Acceptance check may run against wrong commit
> -
>
> Key: HDDS-4017
> URL: https://issues.apache.org/jira/browse/HDDS-4017
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>
> For push builds, acceptance check may build and test a different commit than 
> the one that was pushed.
> The check for 
> [HDDS-3991|https://github.com/apache/hadoop-ozone/commit/404ec6d0725cfe9c80aa912f150c6474037b10bb]
>  built 
> [HDDS-3933|https://github.com/apache/hadoop-ozone/commit/ff7b5a3367eccc0969bfd92a2cafe48899a2aaa5]:
> {code:title=https://github.com/apache/hadoop-ozone/runs/898449998#step:4:30}
> HEAD is now at ff7b5a336 HDDS-3933. Fix memory leak because of too many 
> Datanode State Machine Thread (#1185)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4018) Datanode log spammed by NPE

2020-07-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4018:
-
Labels: pull-request-available  (was: )

> Datanode log spammed by NPE
> ---
>
> Key: HDDS-4018
> URL: https://issues.apache.org/jira/browse/HDDS-4018
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.6.0
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Blocker
>  Labels: pull-request-available
>
> {code}
> datanode_1  | 2020-07-22 13:11:47,845 [Datanode State Machine Thread - 0] 
> WARN statemachine.StateContext: No available thread in pool for past 2 
> seconds.
> datanode_1  | 2020-07-22 13:11:47,846 [Datanode State Machine Thread - 0] 
> ERROR statemachine.DatanodeStateMachine: Unable to finish the execution.
> datanode_1  | java.lang.NullPointerException
> datanode_1  |   at 
> org.apache.hadoop.ozone.container.common.states.datanode.RunningDatanodeState.await(RunningDatanodeState.java:218)
> datanode_1  |   at 
> org.apache.hadoop.ozone.container.common.states.datanode.RunningDatanodeState.await(RunningDatanodeState.java:50)
> datanode_1  |   at 
> org.apache.hadoop.ozone.container.common.statemachine.StateContext.execute(StateContext.java:451)
> datanode_1  |   at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.start(DatanodeStateMachine.java:225)
> datanode_1  |   at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.lambda$startDaemon$0(DatanodeStateMachine.java:396)
> datanode_1  |   at java.base/java.lang.Thread.run(Thread.java:834)
> datanode_1  | 2020-07-22 13:11:47,847 [Datanode State Machine Thread - 0] 
> ERROR statemachine.DatanodeStateMachine: Unable to finish the execution.
> datanode_1  | java.lang.NullPointerException
> datanode_1  |   at 
> org.apache.hadoop.ozone.container.common.states.datanode.RunningDatanodeState.await(RunningDatanodeState.java:218)
> datanode_1  |   at 
> org.apache.hadoop.ozone.container.common.states.datanode.RunningDatanodeState.await(RunningDatanodeState.java:50)
> datanode_1  |   at 
> org.apache.hadoop.ozone.container.common.statemachine.StateContext.execute(StateContext.java:451)
> datanode_1  |   at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.start(DatanodeStateMachine.java:225)
> datanode_1  |   at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.lambda$startDaemon$0(DatanodeStateMachine.java:396)
> datanode_1  |   at java.base/java.lang.Thread.run(Thread.java:834)
> datanode_1  | 2020-07-22 13:11:47,848 [Datanode State Machine Thread - 0] 
> ERROR statemachine.DatanodeStateMachine: Unable to finish the execution.
> datanode_1  | java.lang.NullPointerException
> datanode_1  |   at 
> org.apache.hadoop.ozone.container.common.states.datanode.RunningDatanodeState.await(RunningDatanodeState.java:218)
> datanode_1  |   at 
> org.apache.hadoop.ozone.container.common.states.datanode.RunningDatanodeState.await(RunningDatanodeState.java:50)
> datanode_1  |   at 
> org.apache.hadoop.ozone.container.common.statemachine.StateContext.execute(StateContext.java:451)
> datanode_1  |   at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.start(DatanodeStateMachine.java:225)
> datanode_1  |   at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.lambda$startDaemon$0(DatanodeStateMachine.java:396)
> datanode_1  |   at java.base/java.lang.Thread.run(Thread.java:834)
> datanode_1  | 2020-07-22 13:11:47,848 [Datanode State Machine Thread - 0] 
> ERROR statemachine.DatanodeStateMachine: Unable to finish the execution.
> datanode_1  | java.lang.NullPointerException
> datanode_1  |   at 
> org.apache.hadoop.ozone.container.common.states.datanode.RunningDatanodeState.await(RunningDatanodeState.java:218)
> datanode_1  |   at 
> org.apache.hadoop.ozone.container.common.states.datanode.RunningDatanodeState.await(RunningDatanodeState.java:50)
> datanode_1  |   at 
> org.apache.hadoop.ozone.container.common.statemachine.StateContext.execute(StateContext.java:451)
> datanode_1  |   at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.start(DatanodeStateMachine.java:225)
> datanode_1  |   at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.lambda$startDaemon$0(DatanodeStateMachine.java:396)
> datanode_1  |   at java.base/java.lang.Thread.run(Thread.java:834)
> datanode_1  | 2020-07-22 13:11:47,851 [Datanode State Machine Thread - 0] 
> ERROR statemachine.DatanodeStateMachine: Unable to finish the execution.
> datanode_1  | java.lang.NullPointerException
> datanode_1  |   at 
> 

[GitHub] [hadoop-ozone] adoroszlai opened a new pull request #1250: HDDS-4018. Datanode log spammed by NPE

2020-07-23 Thread GitBox


adoroszlai opened a new pull request #1250:
URL: https://github.com/apache/hadoop-ozone/pull/1250


   ## What changes were proposed in this pull request?
   
   Avoid calling `task.await()` without first calling `task.execute()`.  The 
former uses a variable initialized in the latter.
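The pattern behind the fix can be shown with a minimal, hypothetical stand-in (`TaskSketch` below is not the actual `RunningDatanodeState` code): `await()` reads state that only `execute()` initializes, so it must be guarded.

```java
// Hypothetical illustration of the NPE pattern fixed in this PR:
// await() dereferences a field that only execute() initializes.
class TaskSketch {
  private String result;  // initialized only by execute()

  void execute() {
    result = "done";
  }

  // Unsafe: throws NullPointerException if execute() was never called.
  int unsafeAwait() {
    return result.length();  // NPE when result == null
  }

  // Fixed: skip the wait when there is nothing to wait for.
  int safeAwait() {
    if (result == null) {
      return 0;  // nothing executed yet, so nothing to await
    }
    return result.length();
  }
}
```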
   
   https://issues.apache.org/jira/browse/HDDS-4018
   
   ## How was this patch tested?
   
   Logs from acceptance test are normal size:
   https://github.com/adoroszlai/hadoop-ozone/runs/902827380
   
   Without the patch they grow to 800-1000MB:
   https://github.com/apache/hadoop-ozone/suites/950149055/artifacts/11861873



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4017) Acceptance check may run against wrong commit

2020-07-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4017:
-
Labels: pull-request-available  (was: )

> Acceptance check may run against wrong commit
> -
>
> Key: HDDS-4017
> URL: https://issues.apache.org/jira/browse/HDDS-4017
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>
> For push builds, acceptance check may build and test a different commit than 
> the one that was pushed.
> The check for 
> [HDDS-3991|https://github.com/apache/hadoop-ozone/commit/404ec6d0725cfe9c80aa912f150c6474037b10bb]
>  built 
> [HDDS-3933|https://github.com/apache/hadoop-ozone/commit/ff7b5a3367eccc0969bfd92a2cafe48899a2aaa5]:
> {code:title=https://github.com/apache/hadoop-ozone/runs/898449998#step:4:30}
> HEAD is now at ff7b5a336 HDDS-3933. Fix memory leak because of too many 
> Datanode State Machine Thread (#1185)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] adoroszlai opened a new pull request #1249: HDDS-4017. Acceptance check may run against wrong commit

2020-07-23 Thread GitBox


adoroszlai opened a new pull request #1249:
URL: https://github.com/apache/hadoop-ozone/pull/1249


   ## What changes were proposed in this pull request?
   
   Use the commit SHA to check out a specific commit on `push` and `schedule` 
events.  Keep using the ref name for `pull_request` events (e.g. 
`refs/pull/1234/merge`), because the merge commit may not be available by SHA.
   
   Use `github` context instead of environment variables.  Github substitutes 
these values before running the command, so we can see them in the log.  
Comparison:
   
   
   ```
   git clone https://github.com/${GITHUB_REPOSITORY}.git /mnt/ozone
   cd /mnt/ozone
   git fetch origin "${GITHUB_REF}"
   git checkout FETCH_HEAD
   ```
   
   vs.
   
   ```
   git clone https://github.com/adoroszlai/hadoop-ozone.git /mnt/ozone
   cd /mnt/ozone
   git fetch origin "refs/heads/HDDS-4017"
   git checkout FETCH_HEAD
   ```
   
   https://issues.apache.org/jira/browse/HDDS-4017
   
   ## How was this patch tested?
   
   `push` event:
   https://github.com/adoroszlai/hadoop-ozone/runs/903007668#step:4:8
   
   `pull_request` event is being tested here.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4019) Show the storageDir while need init om or scm

2020-07-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4019:
-
Labels: pull-request-available  (was: )

> Show the storageDir while need init om or scm
> -
>
> Key: HDDS-4019
> URL: https://issues.apache.org/jira/browse/HDDS-4019
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager, SCM
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.6.0
>
>
> When you accidentally use a wrong ozone-site.xml or point the configuration 
> key ozone.metadata.dirs to a new dir, the SCM or OM cannot start and reports 
> that 'ozone om --init' or 'ozone scm --init' is needed, but you don't know 
> that the root cause is a wrong config file or a wrong value of 
> ozone.metadata.dirs. 
> So we can show the current storageDir to the user, so that the user can see 
> the real problem.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] maobaolong opened a new pull request #1248: HDDS-4019. Show the storageDir while need init om or scm

2020-07-23 Thread GitBox


maobaolong opened a new pull request #1248:
URL: https://github.com/apache/hadoop-ozone/pull/1248


   ## What changes were proposed in this pull request?
   
   When you accidentally use a wrong ozone-site.xml or point the configuration 
key ozone.metadata.dirs to a new dir, the SCM or OM cannot start and reports 
that 'ozone om --init' or 'ozone scm --init' is needed, but you don't know that 
the root cause is a wrong config file or a wrong value of ozone.metadata.dirs.
   
   So we can show the current storageDir to the user, so that the user can see 
the real problem.
   
   ## What is the link to the Apache JIRA
   
   HDDS-4019
   
   ## How was this patch tested?
   execute the following command, see the output
   
   ozone scm
   ozone om
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] captainzmc commented on a change in pull request #1233: HDDS-3725. Ozone sh volume client support quota option.

2020-07-23 Thread GitBox


captainzmc commented on a change in pull request #1233:
URL: https://github.com/apache/hadoop-ozone/pull/1233#discussion_r459468457



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
##
@@ -1706,21 +1706,24 @@ public boolean setOwner(String volume, String owner) 
throws IOException {
* Changes the Quota on a volume.
*
* @param volume - Name of the volume.
-   * @param quota  - Quota in bytes.
+   * @param storagespceQuota - Quota in bytes.
+   * @param namespaceQuota - Quota in counts.
* @throws IOException
*/
   @Override
-  public void setQuota(String volume, long quota) throws IOException {
-if (isAclEnabled) {
+  public void setQuota(String volume, long namespaceQuota,

Review comment:
   Similar to 
[setOwner](https://github.com/apache/hadoop-ozone/blob/7dac140024214c2189b72fad0566a0252d63e93c/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java#L1697),
 there already exists 
[setQuota](https://github.com/apache/hadoop-ozone/blob/7dac140024214c2189b72fad0566a0252d63e93c/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java#L1727).
 If none of this is needed, we can use a separate PR to clean up these unused 
methods.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org






[jira] [Assigned] (HDDS-4018) Datanode log spammed by NPE

2020-07-23 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai reassigned HDDS-4018:
--

Assignee: Attila Doroszlai

> Datanode log spammed by NPE
> ---
>
> Key: HDDS-4018
> URL: https://issues.apache.org/jira/browse/HDDS-4018
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.6.0
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Blocker
>
> {code}
> datanode_1  | 2020-07-22 13:11:47,845 [Datanode State Machine Thread - 0] 
> WARN statemachine.StateContext: No available thread in pool for past 2 
> seconds.
> datanode_1  | 2020-07-22 13:11:47,846 [Datanode State Machine Thread - 0] 
> ERROR statemachine.DatanodeStateMachine: Unable to finish the execution.
> datanode_1  | java.lang.NullPointerException
> datanode_1  |   at 
> org.apache.hadoop.ozone.container.common.states.datanode.RunningDatanodeState.await(RunningDatanodeState.java:218)
> datanode_1  |   at 
> org.apache.hadoop.ozone.container.common.states.datanode.RunningDatanodeState.await(RunningDatanodeState.java:50)
> datanode_1  |   at 
> org.apache.hadoop.ozone.container.common.statemachine.StateContext.execute(StateContext.java:451)
> datanode_1  |   at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.start(DatanodeStateMachine.java:225)
> datanode_1  |   at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.lambda$startDaemon$0(DatanodeStateMachine.java:396)
> datanode_1  |   at java.base/java.lang.Thread.run(Thread.java:834)
> datanode_1  | 2020-07-22 13:11:47,847 [Datanode State Machine Thread - 0] 
> ERROR statemachine.DatanodeStateMachine: Unable to finish the execution.
> datanode_1  | java.lang.NullPointerException
> datanode_1  |   at 
> org.apache.hadoop.ozone.container.common.states.datanode.RunningDatanodeState.await(RunningDatanodeState.java:218)
> datanode_1  |   at 
> org.apache.hadoop.ozone.container.common.states.datanode.RunningDatanodeState.await(RunningDatanodeState.java:50)
> datanode_1  |   at 
> org.apache.hadoop.ozone.container.common.statemachine.StateContext.execute(StateContext.java:451)
> datanode_1  |   at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.start(DatanodeStateMachine.java:225)
> datanode_1  |   at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.lambda$startDaemon$0(DatanodeStateMachine.java:396)
> datanode_1  |   at java.base/java.lang.Thread.run(Thread.java:834)
> datanode_1  | 2020-07-22 13:11:47,848 [Datanode State Machine Thread - 0] 
> ERROR statemachine.DatanodeStateMachine: Unable to finish the execution.
> datanode_1  | java.lang.NullPointerException
> datanode_1  |   at 
> org.apache.hadoop.ozone.container.common.states.datanode.RunningDatanodeState.await(RunningDatanodeState.java:218)
> datanode_1  |   at 
> org.apache.hadoop.ozone.container.common.states.datanode.RunningDatanodeState.await(RunningDatanodeState.java:50)
> datanode_1  |   at 
> org.apache.hadoop.ozone.container.common.statemachine.StateContext.execute(StateContext.java:451)
> datanode_1  |   at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.start(DatanodeStateMachine.java:225)
> datanode_1  |   at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.lambda$startDaemon$0(DatanodeStateMachine.java:396)
> datanode_1  |   at java.base/java.lang.Thread.run(Thread.java:834)
> datanode_1  | 2020-07-22 13:11:47,848 [Datanode State Machine Thread - 0] 
> ERROR statemachine.DatanodeStateMachine: Unable to finish the execution.
> datanode_1  | java.lang.NullPointerException
> datanode_1  |   at 
> org.apache.hadoop.ozone.container.common.states.datanode.RunningDatanodeState.await(RunningDatanodeState.java:218)
> datanode_1  |   at 
> org.apache.hadoop.ozone.container.common.states.datanode.RunningDatanodeState.await(RunningDatanodeState.java:50)
> datanode_1  |   at 
> org.apache.hadoop.ozone.container.common.statemachine.StateContext.execute(StateContext.java:451)
> datanode_1  |   at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.start(DatanodeStateMachine.java:225)
> datanode_1  |   at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.lambda$startDaemon$0(DatanodeStateMachine.java:396)
> datanode_1  |   at java.base/java.lang.Thread.run(Thread.java:834)
> datanode_1  | 2020-07-22 13:11:47,851 [Datanode State Machine Thread - 0] 
> ERROR statemachine.DatanodeStateMachine: Unable to finish the execution.
> datanode_1  | java.lang.NullPointerException
> datanode_1  |   at 
> org.apache.hadoop.ozone.container.common.states.datanode.RunningDatanodeState.await(RunningDatanodeState.java:218)
> 

[jira] [Created] (HDDS-4019) Show the storageDir while need init om or scm

2020-07-23 Thread maobaolong (Jira)
maobaolong created HDDS-4019:


 Summary: Show the storageDir while need init om or scm
 Key: HDDS-4019
 URL: https://issues.apache.org/jira/browse/HDDS-4019
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone Manager, SCM
Reporter: maobaolong
Assignee: maobaolong
 Fix For: 0.6.0


When you accidentally use a wrong ozone-site.xml, or the configuration key 
ozone.metadata.dirs points to a new directory, SCM or OM cannot start and 
reports that 'ozone om --init' or 'ozone scm --init' is needed, but it is not 
obvious that the root cause is a wrong config file or a wrong value of 
ozone.metadata.dirs. 

So we should show the current storageDir to the user, so that the user can see 
the real problem.
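
A minimal sketch of the proposed improvement (illustrative only; the actual 
SCM/OM startup error path and message wording differ):

```java
// Illustrative helper: include the resolved storage directory in the
// "not initialized" error, so a misconfigured ozone.metadata.dirs is
// immediately visible to the operator. Names here are assumptions.
public class InitCheck {
  public static String notInitializedMessage(String component,
      String storageDir) {
    return component + " not initialized. Please run 'ozone "
        + component.toLowerCase() + " --init' first"
        + " (current storage dir: " + storageDir + ")";
  }
}
```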



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-3511) Fix definition of DelegationTokenTable in OmMetadataManagerImpl

2020-07-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-3511:
-
Labels: newbie pull-request-available  (was: newbie)

> Fix definition of DelegationTokenTable in OmMetadataManagerImpl
> ---
>
> Key: HDDS-3511
> URL: https://issues.apache.org/jira/browse/HDDS-3511
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: YiSheng Lien
>Priority: Major
>  Labels: newbie, pull-request-available
>
> The definition of 
> [dTokenTable|https://github.com/apache/hadoop-ozone/blob/876bec0130094b24472a7017fdb1fd81a65023bc/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java#L115]
>  should be fixed.
> And IMHO it could be OzoneTokenID -> renew_time.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4018) Datanode log spammed by NPE

2020-07-23 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-4018:
---
Description: 
{code}
datanode_1  | 2020-07-22 13:11:47,845 [Datanode State Machine Thread - 0] WARN 
statemachine.StateContext: No available thread in pool for past 2 seconds.
datanode_1  | 2020-07-22 13:11:47,846 [Datanode State Machine Thread - 0] ERROR 
statemachine.DatanodeStateMachine: Unable to finish the execution.
datanode_1  | java.lang.NullPointerException
datanode_1  |   at 
org.apache.hadoop.ozone.container.common.states.datanode.RunningDatanodeState.await(RunningDatanodeState.java:218)
datanode_1  |   at 
org.apache.hadoop.ozone.container.common.states.datanode.RunningDatanodeState.await(RunningDatanodeState.java:50)
datanode_1  |   at 
org.apache.hadoop.ozone.container.common.statemachine.StateContext.execute(StateContext.java:451)
datanode_1  |   at 
org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.start(DatanodeStateMachine.java:225)
datanode_1  |   at 
org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.lambda$startDaemon$0(DatanodeStateMachine.java:396)
datanode_1  |   at java.base/java.lang.Thread.run(Thread.java:834)
datanode_1  | 2020-07-22 13:11:47,847 [Datanode State Machine Thread - 0] ERROR 
statemachine.DatanodeStateMachine: Unable to finish the execution.
datanode_1  | java.lang.NullPointerException
datanode_1  |   at 
org.apache.hadoop.ozone.container.common.states.datanode.RunningDatanodeState.await(RunningDatanodeState.java:218)
datanode_1  |   at 
org.apache.hadoop.ozone.container.common.states.datanode.RunningDatanodeState.await(RunningDatanodeState.java:50)
datanode_1  |   at 
org.apache.hadoop.ozone.container.common.statemachine.StateContext.execute(StateContext.java:451)
datanode_1  |   at 
org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.start(DatanodeStateMachine.java:225)
datanode_1  |   at 
org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.lambda$startDaemon$0(DatanodeStateMachine.java:396)
datanode_1  |   at java.base/java.lang.Thread.run(Thread.java:834)
datanode_1  | 2020-07-22 13:11:47,848 [Datanode State Machine Thread - 0] ERROR 
statemachine.DatanodeStateMachine: Unable to finish the execution.
datanode_1  | java.lang.NullPointerException
datanode_1  |   at 
org.apache.hadoop.ozone.container.common.states.datanode.RunningDatanodeState.await(RunningDatanodeState.java:218)
datanode_1  |   at 
org.apache.hadoop.ozone.container.common.states.datanode.RunningDatanodeState.await(RunningDatanodeState.java:50)
datanode_1  |   at 
org.apache.hadoop.ozone.container.common.statemachine.StateContext.execute(StateContext.java:451)
datanode_1  |   at 
org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.start(DatanodeStateMachine.java:225)
datanode_1  |   at 
org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.lambda$startDaemon$0(DatanodeStateMachine.java:396)
datanode_1  |   at java.base/java.lang.Thread.run(Thread.java:834)
datanode_1  | 2020-07-22 13:11:47,848 [Datanode State Machine Thread - 0] ERROR 
statemachine.DatanodeStateMachine: Unable to finish the execution.
datanode_1  | java.lang.NullPointerException
datanode_1  |   at 
org.apache.hadoop.ozone.container.common.states.datanode.RunningDatanodeState.await(RunningDatanodeState.java:218)
datanode_1  |   at 
org.apache.hadoop.ozone.container.common.states.datanode.RunningDatanodeState.await(RunningDatanodeState.java:50)
datanode_1  |   at 
org.apache.hadoop.ozone.container.common.statemachine.StateContext.execute(StateContext.java:451)
datanode_1  |   at 
org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.start(DatanodeStateMachine.java:225)
datanode_1  |   at 
org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.lambda$startDaemon$0(DatanodeStateMachine.java:396)
datanode_1  |   at java.base/java.lang.Thread.run(Thread.java:834)
datanode_1  | 2020-07-22 13:11:47,851 [Datanode State Machine Thread - 0] ERROR 
statemachine.DatanodeStateMachine: Unable to finish the execution.
datanode_1  | java.lang.NullPointerException
datanode_1  |   at 
org.apache.hadoop.ozone.container.common.states.datanode.RunningDatanodeState.await(RunningDatanodeState.java:218)
datanode_1  |   at 
org.apache.hadoop.ozone.container.common.states.datanode.RunningDatanodeState.await(RunningDatanodeState.java:50)
datanode_1  |   at 
org.apache.hadoop.ozone.container.common.statemachine.StateContext.execute(StateContext.java:451)
datanode_1  |   at 
org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.start(DatanodeStateMachine.java:225)
datanode_1  |   at 
org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.lambda$startDaemon$0(DatanodeStateMachine.java:396)
datanode_1  |   at 

[GitHub] [hadoop-ozone] aeioulisa opened a new pull request #1247: HDDS-3511. Fix definition of DelegationTokenTable in OmMetadataManagerImpl

2020-07-23 Thread GitBox


aeioulisa opened a new pull request #1247:
URL: https://github.com/apache/hadoop-ozone/pull/1247


   ## What changes were proposed in this pull request?
   Fix the definition of dTokenTable.
   
   ## What is the link to the Apache JIRA
   https://issues.apache.org/jira/browse/HDDS-3511
   
   ## How was this patch tested?
   This patch only fixes the definition, so no new test is needed.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-4018) Datanode log spammed by NPE

2020-07-23 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-4018:
--

 Summary: Datanode log spammed by NPE
 Key: HDDS-4018
 URL: https://issues.apache.org/jira/browse/HDDS-4018
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Datanode
Affects Versions: 0.6.0
Reporter: Attila Doroszlai


{code}
datanode_1  | 2020-07-22 13:11:47,845 [Datanode State Machine Thread - 0] WARN 
statemachine.StateContext: No available thread in pool for past 2 seconds.
datanode_1  | 2020-07-22 13:11:47,846 [Datanode State Machine Thread - 0] ERROR 
statemachine.DatanodeStateMachine: Unable to finish the execution.
datanode_1  | java.lang.NullPointerException
datanode_1  |   at 
org.apache.hadoop.ozone.container.common.states.datanode.RunningDatanodeState.await(RunningDatanodeState.java:218)
datanode_1  |   at 
org.apache.hadoop.ozone.container.common.states.datanode.RunningDatanodeState.await(RunningDatanodeState.java:50)
datanode_1  |   at 
org.apache.hadoop.ozone.container.common.statemachine.StateContext.execute(StateContext.java:451)
datanode_1  |   at 
org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.start(DatanodeStateMachine.java:225)
datanode_1  |   at 
org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.lambda$startDaemon$0(DatanodeStateMachine.java:396)
datanode_1  |   at java.base/java.lang.Thread.run(Thread.java:834)
datanode_1  | 2020-07-22 13:11:47,847 [Datanode State Machine Thread - 0] ERROR 
statemachine.DatanodeStateMachine: Unable to finish the execution.
datanode_1  | java.lang.NullPointerException
datanode_1  |   at 
org.apache.hadoop.ozone.container.common.states.datanode.RunningDatanodeState.await(RunningDatanodeState.java:218)
datanode_1  |   at 
org.apache.hadoop.ozone.container.common.states.datanode.RunningDatanodeState.await(RunningDatanodeState.java:50)
datanode_1  |   at 
org.apache.hadoop.ozone.container.common.statemachine.StateContext.execute(StateContext.java:451)
datanode_1  |   at 
org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.start(DatanodeStateMachine.java:225)
datanode_1  |   at 
org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.lambda$startDaemon$0(DatanodeStateMachine.java:396)
datanode_1  |   at java.base/java.lang.Thread.run(Thread.java:834)
datanode_1  | 2020-07-22 13:11:47,848 [Datanode State Machine Thread - 0] ERROR 
statemachine.DatanodeStateMachine: Unable to finish the execution.
datanode_1  | java.lang.NullPointerException
datanode_1  |   at 
org.apache.hadoop.ozone.container.common.states.datanode.RunningDatanodeState.await(RunningDatanodeState.java:218)
datanode_1  |   at 
org.apache.hadoop.ozone.container.common.states.datanode.RunningDatanodeState.await(RunningDatanodeState.java:50)
datanode_1  |   at 
org.apache.hadoop.ozone.container.common.statemachine.StateContext.execute(StateContext.java:451)
datanode_1  |   at 
org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.start(DatanodeStateMachine.java:225)
datanode_1  |   at 
org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.lambda$startDaemon$0(DatanodeStateMachine.java:396)
datanode_1  |   at java.base/java.lang.Thread.run(Thread.java:834)
datanode_1  | 2020-07-22 13:11:47,848 [Datanode State Machine Thread - 0] ERROR 
statemachine.DatanodeStateMachine: Unable to finish the execution.
datanode_1  | java.lang.NullPointerException
datanode_1  |   at 
org.apache.hadoop.ozone.container.common.states.datanode.RunningDatanodeState.await(RunningDatanodeState.java:218)
datanode_1  |   at 
org.apache.hadoop.ozone.container.common.states.datanode.RunningDatanodeState.await(RunningDatanodeState.java:50)
datanode_1  |   at 
org.apache.hadoop.ozone.container.common.statemachine.StateContext.execute(StateContext.java:451)
datanode_1  |   at 
org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.start(DatanodeStateMachine.java:225)
datanode_1  |   at 
org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.lambda$startDaemon$0(DatanodeStateMachine.java:396)
datanode_1  |   at java.base/java.lang.Thread.run(Thread.java:834)
datanode_1  | 2020-07-22 13:11:47,851 [Datanode State Machine Thread - 0] ERROR 
statemachine.DatanodeStateMachine: Unable to finish the execution.
datanode_1  | java.lang.NullPointerException
datanode_1  |   at 
org.apache.hadoop.ozone.container.common.states.datanode.RunningDatanodeState.await(RunningDatanodeState.java:218)
datanode_1  |   at 
org.apache.hadoop.ozone.container.common.states.datanode.RunningDatanodeState.await(RunningDatanodeState.java:50)
datanode_1  |   at 
org.apache.hadoop.ozone.container.common.statemachine.StateContext.execute(StateContext.java:451)
datanode_1  |   at 
org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.start(DatanodeStateMachine.java:225)

[jira] [Created] (HDDS-4017) Acceptance check may run against wrong commit

2020-07-23 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-4017:
--

 Summary: Acceptance check may run against wrong commit
 Key: HDDS-4017
 URL: https://issues.apache.org/jira/browse/HDDS-4017
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: build
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


For push builds, acceptance check may build and test a different commit than 
the one that was pushed.

The check for 
[HDDS-3991|https://github.com/apache/hadoop-ozone/commit/404ec6d0725cfe9c80aa912f150c6474037b10bb]
 built 
[HDDS-3933|https://github.com/apache/hadoop-ozone/commit/ff7b5a3367eccc0969bfd92a2cafe48899a2aaa5]:

{code:title=https://github.com/apache/hadoop-ozone/runs/898449998#step:4:30}
HEAD is now at ff7b5a336 HDDS-3933. Fix memory leak because of too many 
Datanode State Machine Thread (#1185)
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-3994) Write object when met exception can be slower than before

2020-07-23 Thread Lokesh Jain (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17163506#comment-17163506
 ] 

Lokesh Jain commented on HDDS-3994:
---

[~maobaolong] the new retry policy was added to provide more stability for 
client writes. There were a couple of problems with the old retry policy:
# The client retried too frequently, at a very low interval (1 second). This 
could increase load on the pipeline and increase memory and GC pressure.
# A client request could be retried for a long period of time, i.e. a couple of 
hours.

The new policy addresses these concerns with a well-defined timeout for the 
Ratis client write operation on a pipeline. An exponential backoff policy was 
also added for situations where the pipeline is running low on resources; it is 
used only in those low-resource situations.

I think we should try these new policies and tune the default values for the 
configs. For the situation you have described, I think we only need to tune the 
config hdds.ratis.client.multilinear.random.retry.policy.
Possible defaults to try here are 
1s, 5, 10s, 5, 15s, 5, 20s, 5, 25s, 5, 60s, 10 and 1s, 10, 10s, 5, 15s, 5, 20s, 
5, 25s, 5, 60s, 10. The other configs should not have any impact on the problem 
you are seeing.
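
To make the policy values above concrete, the "sleep, count, sleep, count, ..." 
pairs can be read as a flat per-attempt schedule. The parser below is an 
illustrative sketch of that interpretation, not the actual Ratis 
implementation:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative reading of a "sleep, count, sleep, count, ..." retry policy
// string such as "1s, 5, 10s, 5": five attempts sleeping 1s each, then five
// attempts sleeping 10s each. Not the real Ratis parser.
public class RetrySchedule {
  // Expand pairs into one sleep value (seconds) per retry attempt.
  public static List<Integer> expand(String policy) {
    List<Integer> sleeps = new ArrayList<>();
    String[] parts = policy.split(",");
    for (int i = 0; i + 1 < parts.length; i += 2) {
      int sleepSec = Integer.parseInt(parts[i].trim().replace("s", ""));
      int count = Integer.parseInt(parts[i + 1].trim());
      for (int j = 0; j < count; j++) {
        sleeps.add(sleepSec);
      }
    }
    return sleeps;
  }

  // Total time spent sleeping if every retry in the schedule is used.
  public static int totalSeconds(String policy) {
    return expand(policy).stream().mapToInt(Integer::intValue).sum();
  }
}
```

Under this reading, the first suggested default 
"1s, 5, 10s, 5, 15s, 5, 20s, 5, 25s, 5, 60s, 10" bounds the total retry sleep 
time at roughly 16 minutes, versus the old fixed-interval policy's much longer 
worst case.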

> Write object when met exception can be slower than before
> -
>
> Key: HDDS-3994
> URL: https://issues.apache.org/jira/browse/HDDS-3994
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.6.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
>  Labels: pull-request-available
>
> After HDDS-3350 , the retry policy changed, and the client write performance 
> getting lower than before.
>  
> With HDDS-3350, I restore the method RatisHelper#createRetryPolicy to the 
> previous commit, it works well.
>  
> The previous is 
>  
> {code:java}
> static RetryPolicy createRetryPolicy(ConfigurationSource conf) {
>   int maxRetryCount =
>       conf.getInt(OzoneConfigKeys.DFS_RATIS_CLIENT_REQUEST_MAX_RETRIES_KEY,
>           OzoneConfigKeys.DFS_RATIS_CLIENT_REQUEST_MAX_RETRIES_DEFAULT);
>   long retryInterval = conf.getTimeDuration(
>       OzoneConfigKeys.DFS_RATIS_CLIENT_REQUEST_RETRY_INTERVAL_KEY,
>       OzoneConfigKeys.DFS_RATIS_CLIENT_REQUEST_RETRY_INTERVAL_DEFAULT
>           .toIntExact(TimeUnit.MILLISECONDS),
>       TimeUnit.MILLISECONDS);
>   TimeDuration sleepDuration =
>       TimeDuration.valueOf(retryInterval, TimeUnit.MILLISECONDS);
>   RetryPolicy retryPolicy = RetryPolicies
>       .retryUpToMaximumCountWithFixedSleep(maxRetryCount, sleepDuration);
>   return retryPolicy;
> }
> {code}
> When I switch logLevel to TRACE level, i see the following log While using 
> HDDS-3350
> 2020-07-21 12:56:11,822 [grpc-default-executor-5] TRACE impl.OrderedAsync: 
> client-6F623ADF656D: Failed* 
> RaftClientRequest:client-6F623ADF656D->207b98d9-ad64-45a8-940f-504b514feff5@group-83A28012848F,
>  cid=2876, seq=1*, Watch(0), null
> java.util.concurrent.CompletionException: 
> org.apache.ratis.protocol.LeaderNotReadyException: 
> 207b98d9-ad64-45a8-940f-504b514feff5@group-83A28012848F is in LEADER state 
> but not ready yet.
> at 
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
> at 
> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
> at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:593)
> at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
> at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
> at 
> java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
> at 
> org.apache.ratis.grpc.client.GrpcClientProtocolClient$AsyncStreamObservers.completeReplyExceptionally(GrpcClientProtocolClient.java:358)
> at 
> org.apache.ratis.grpc.client.GrpcClientProtocolClient$AsyncStreamObservers.access$000(GrpcClientProtocolClient.java:264)
> at 
> org.apache.ratis.grpc.client.GrpcClientProtocolClient$AsyncStreamObservers$1.onNext(GrpcClientProtocolClient.java:283)
> at 
> org.apache.ratis.grpc.client.GrpcClientProtocolClient$AsyncStreamObservers$1.onNext(GrpcClientProtocolClient.java:269)
> at 
> org.apache.ratis.thirdparty.io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onMessage(ClientCalls.java:436)
> at 
> org.apache.ratis.thirdparty.io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1MessagesAvailable.runInternal(ClientCallImpl.java:658)
> at 
> 

[GitHub] [hadoop-ozone] ChenSammi removed a comment on pull request #1012: HDDS-3658. Stop to persist container related pipeline info of each ke…

2020-07-23 Thread GitBox


ChenSammi removed a comment on pull request #1012:
URL: https://github.com/apache/hadoop-ozone/pull/1012#issuecomment-662971135


   /retest



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] github-actions[bot] commented on pull request #1012: HDDS-3658. Stop to persist container related pipeline info of each ke…

2020-07-23 Thread GitBox


github-actions[bot] commented on pull request #1012:
URL: https://github.com/apache/hadoop-ozone/pull/1012#issuecomment-662971612


   To re-run CI checks, please follow these steps with the source branch 
checked out:
   ```
   git commit --allow-empty -m 'trigger new CI check'
   git push
   ```



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] ChenSammi commented on pull request #1012: HDDS-3658. Stop to persist container related pipeline info of each ke…

2020-07-23 Thread GitBox


ChenSammi commented on pull request #1012:
URL: https://github.com/apache/hadoop-ozone/pull/1012#issuecomment-662971135


   /retest






[jira] [Commented] (HDDS-4014) FLAKY-UT: TestCommitWatcher#testReleaseBuffersOnException

2020-07-23 Thread Attila Doroszlai (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17163471#comment-17163471
 ] 

Attila Doroszlai commented on HDDS-4014:


[~maobaolong], can you please link to the run where it failed?  This should 
have been fixed by HDDS-3986.

> FLAKY-UT: TestCommitWatcher#testReleaseBuffersOnException
> -
>
> Key: HDDS-4014
> URL: https://issues.apache.org/jira/browse/HDDS-4014
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 0.6.0
>Reporter: maobaolong
>Priority: Major
>
> [INFO] Running org.apache.hadoop.ozone.client.rpc.TestCommitWatcher
> [ERROR] Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 88.944 s <<< FAILURE! - in 
> org.apache.hadoop.ozone.client.rpc.TestCommitWatcher
> [ERROR] 
> testReleaseBuffersOnException(org.apache.hadoop.ozone.client.rpc.TestCommitWatcher)
>   Time elapsed: 47.165 s  <<< FAILURE!
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.ozone.client.rpc.TestCommitWatcher.testReleaseBuffersOnException(TestCommitWatcher.java:320)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] captainzmc commented on pull request #1233: HDDS-3725. Ozone sh volume client support quota option.

2020-07-23 Thread GitBox


captainzmc commented on pull request #1233:
URL: https://github.com/apache/hadoop-ozone/pull/1233#issuecomment-662955406


   > Thanks @captainzmc for the work.
   > Overall the functionality part is good and some suggestions are added 
inline; the test-related part will be reviewed soon.
   > 
   > Here I have two questions IMHO.
   > 
   > (1) Could we make the variable pair consistent?
   > I think it would be better to use one of (`storagespaceQuota`/ 
`namespaceQuota`) and (`quotaInBytes`/ `quotaInCounts`).
   > If there is another purpose in using both, please let me know.
   > 
   > (2) Will we add limits on creating volumes and buckets in a later patch?
   > I deployed this patch on my machine, and created a volume without setting 
either quota.
   > I found that the volume's storage can exceed my local storage.
   > And I found we can create a bucket even if the bucket quota is -1.
   
   Thanks for @cxorm 's feedback.
   1. I will unify the naming and address the other review issues as soon as 
possible.
   2. Currently, quota can only be set, but does not take effect. [This task 
lists plans and designs](https://issues.apache.org/jira/browse/HDDS-541). I 
will continue to improve this part in the future.
   






[jira] [Resolved] (HDDS-671) Hive HSI insert tries to create data in Hdfs for Ozone external table

2020-07-23 Thread Istvan Fajth (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Fajth resolved HDDS-671.
---
Resolution: Not A Bug

This one does not seem to be a bug, but only a permission issue, which has 
probably already changed.

When you run a Hive query, the underlying YARN job has to have the Hive-related 
classpath elements, and therefore needs to access them; afaik these resources 
usually reside on HDFS, and are sometimes copied together into a temporary 
directory for all the containers to ensure access to the runtime dependencies. 
Based on the code path visible in the exception, I think this logic is 
collecting things to HDFS into the home directory of the user running the job, 
as Arpit said, most likely because the default FS is still HDFS in this case.
Not sure why the username became anonymous in this case, but that is probably 
not the case anymore.

I am closing this as not a bug for now, feel free to reopen if anyone disagrees.

> Hive HSI insert tries to create data in Hdfs for Ozone external table
> -
>
> Key: HDDS-671
> URL: https://issues.apache.org/jira/browse/HDDS-671
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Priority: Major
>  Labels: app-compat
>
> Hive HSI insert tries to create data in Hdfs for Ozone external table, when 
> "hive.server2.enable.doAs" is set to true 
> Exception details in comment below.






[jira] [Resolved] (HDDS-4004) Remove unused jersey-json from transitive Hadoop dependencies

2020-07-23 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai resolved HDDS-4004.

Fix Version/s: 0.7.0
   Resolution: Fixed

> Remove unused jersey-json from transitive Hadoop dependencies
> -
>
> Key: HDDS-4004
> URL: https://issues.apache.org/jira/browse/HDDS-4004
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.7.0
>
>
> jersey-json is inherited from hadoop-common but it's not used by Ozone (and 
> not used by the active code-path of hadoop-common).
> As it's an older library, it seems better to remove it.






[jira] [Updated] (HDDS-4004) Remove unused jersey-json from transitive Hadoop dependencies

2020-07-23 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-4004:
---
Labels:   (was: pull-request-available)

> Remove unused jersey-json from transitive Hadoop dependencies
> -
>
> Key: HDDS-4004
> URL: https://issues.apache.org/jira/browse/HDDS-4004
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
> Fix For: 0.7.0
>
>
> jersey-json is inherited from hadoop-common but it's not used by Ozone (and 
> not used by the active code-path of hadoop-common).
> As it's an older library, it seems better to remove it.






[GitHub] [hadoop-ozone] cxorm commented on pull request #1075: HDDS-3369. Cleanup old write-path of volume in OM

2020-07-23 Thread GitBox


cxorm commented on pull request #1075:
URL: https://github.com/apache/hadoop-ozone/pull/1075#issuecomment-662881106


   Sorry, I didn't have a computer for the last two weeks :cry: 
   Looking at this PR again.






[GitHub] [hadoop-ozone] ChenSammi commented on pull request #1241: HDDS-4006. Disallow MPU on encrypted buckets.

2020-07-23 Thread GitBox


ChenSammi commented on pull request #1241:
URL: https://github.com/apache/hadoop-ozone/pull/1241#issuecomment-662879210


   The patch LGTM +1. 






[GitHub] [hadoop-ozone] cxorm commented on a change in pull request #1236: HDDS-4000. Split acceptance tests to reduce CI feedback time

2020-07-23 Thread GitBox


cxorm commented on a change in pull request #1236:
URL: https://github.com/apache/hadoop-ozone/pull/1236#discussion_r459284202



##
File path: hadoop-ozone/dist/src/main/compose/ozonesecure-om-ha/test.sh
##
@@ -15,6 +15,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
+

Review comment:
   Seems we should not add this line here :wink: 








[GitHub] [hadoop-ozone] cxorm commented on pull request #1233: HDDS-3725. Ozone sh volume client support quota option.

2020-07-23 Thread GitBox


cxorm commented on pull request #1233:
URL: https://github.com/apache/hadoop-ozone/pull/1233#issuecomment-66287


   Sorry for my careless huge "Add single comment" yesterday.
   
   The test-related part and the API-backward-compatibility part remain to be 
reviewed (if anyone continues reviewing after this comment).






[GitHub] [hadoop-ozone] adoroszlai commented on a change in pull request #1245: HDDS-4011. Update S3 related documentation.

2020-07-23 Thread GitBox


adoroszlai commented on a change in pull request #1245:
URL: https://github.com/apache/hadoop-ozone/pull/1245#discussion_r459269878



##
File path: hadoop-hdds/docs/content/start/StartFromDockerHub.md
##
@@ -72,7 +72,7 @@ connecting to the SCM's UI at 
[http://localhost:9876](http://localhost:9876).
 
 The S3 gateway endpoint will be exposed at port 9878. You can use Ozone's S3
 support as if you are working against the real S3.  S3 buckets are stored under
-the `/s3v` volume, which needs to be created by an administrator first:
+the `/s3v` volume:
 
 ```
 docker-compose exec scm ozone sh volume create /s3v

Review comment:
   This code block can be removed, too.

##
File path: hadoop-hdds/docs/content/interface/S3.md
##
@@ -24,7 +24,7 @@ summary: Ozone supports Amazon's Simple Storage Service (S3) 
protocol. In fact,
 
 Ozone provides S3 compatible REST interface to use the object store data with 
any S3 compatible tools.
 
-S3 buckets are stored under the `/s3v`(Default is s3v, which can be setted 
through ozone.s3g.volume.name) volume, which needs to be created by an 
administrator first.
+S3 buckets are stored under the `/s3v`(Default is s3v, which can be setted 
through ozone.s3g.volume.name) volume.

Review comment:
   I think we can improve wording here:
   
   ```suggestion
   S3 buckets are stored under the `/s3v` volume.  The default name `s3v` can 
be changed by setting the `ozone.s3g.volume.name` config property in 
`ozone-site.xml`.
   ```
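   For illustration, the property mentioned in the suggestion would be set in 
`ozone-site.xml` roughly like this (the volume name `mybuckets` is a 
hypothetical example; the default is `s3v`):

   ```xml
   <property>
     <name>ozone.s3g.volume.name</name>
     <!-- hypothetical custom volume name; defaults to s3v -->
     <value>mybuckets</value>
   </property>
   ```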








[GitHub] [hadoop-ozone] cxorm removed a comment on pull request #1233: HDDS-3725. Ozone sh volume client support quota option.

2020-07-23 Thread GitBox


cxorm removed a comment on pull request #1233:
URL: https://github.com/apache/hadoop-ozone/pull/1233#issuecomment-662824611


   Thanks @captainzmc for the work.
   Overall the functionality part is good and some suggestions are added 
inline; the test-related part will be reviewed soon.
   
   Here I have two questions IMHO.
   
   (1) Could we make the variable pair consistent?
   I think it would be better to use one of (`storagespaceQuota`/ 
`namespaceQuota`) and (`quotaInBytes`/ `quotaInCounts`).
   If there is another purpose in using both, please let me know.
   
   (2) Will we add limits on creating volumes and buckets in a later patch?
   I deployed this patch on my machine, and created a volume without setting 
either quota.
   I found that the volume's storage can exceed my local storage.
   And I found we can create a bucket even if the bucket quota is -1.
   ![create_bucket](https://i.imgur.com/oN1Jf4X.png)






[GitHub] [hadoop-ozone] cxorm commented on a change in pull request #1233: HDDS-3725. Ozone sh volume client support quota option.

2020-07-23 Thread GitBox


cxorm commented on a change in pull request #1233:
URL: https://github.com/apache/hadoop-ozone/pull/1233#discussion_r459266753



##
File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
##
@@ -204,6 +204,16 @@ public static Versioning getVersioning(boolean versioning) 
{
*/
   public static final long MAX_QUOTA_IN_BYTES = 1024L * 1024 * TB;
 
+  /**
+   * Quota value.
+   */
+  public static final long QUOTA_COUNT_RESET = -1;

Review comment:
   Could you please update the comment to "Quota of bucket counts" and add 
a description of the meaning of "-1" here?

##
File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/client/OzoneQuota.java
##
@@ -25,26 +25,75 @@
  * represents an OzoneQuota Object that can be applied to
  * a storage volume.
  */
-public class OzoneQuota {
+public final class OzoneQuota {
 
   public static final String OZONE_QUOTA_BYTES = "BYTES";
+  public static final String OZONE_QUOTA_KB = "KB";
   public static final String OZONE_QUOTA_MB = "MB";
   public static final String OZONE_QUOTA_GB = "GB";
   public static final String OZONE_QUOTA_TB = "TB";
 
-  private Units unit;
-  private long size;
-
   /** Quota Units.*/
   public enum Units {UNDEFINED, BYTES, KB, MB, GB, TB}
 
+  // Quota to decide how many buckets or keys can be created.
+  private long namespaceQuota;
+  // Quota to decide how many storage space will be used in bytes.
+  private long storagespaceQuota;
+  private RawStorageSpaceQuota rawStoragespaceQuota;
+
+  private static class RawStorageSpaceQuota {

Review comment:
   The name RawStorageSpace is a little confusing. We could add a comment 
describing the class `RawStorageSpaceQuota` here IMHO.

##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/VolumeManagerImpl.java
##
@@ -273,12 +272,14 @@ private void setOwnerCommitToDB(UserVolumeInfo 
oldOwnerVolumeList,
* Changes the Quota on a volume.
*
* @param volume - Name of the volume.
-   * @param quota - Quota in bytes.
+   * @param namespaceQuota - Quota in count for bucket.
+   * @param storagespaceQuota - Quota in bytes.
*
* @throws IOException
*/
   @Override
-  public void setQuota(String volume, long quota) throws IOException {
+  public void setQuota(String volume, long namespaceQuota,

Review comment:
   This part is the same as comments on `OzoneManager.java`

##
File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/client/OzoneQuota.java
##
@@ -25,26 +25,75 @@
  * represents an OzoneQuota Object that can be applied to
  * a storage volume.
  */
-public class OzoneQuota {
+public final class OzoneQuota {
 
   public static final String OZONE_QUOTA_BYTES = "BYTES";
+  public static final String OZONE_QUOTA_KB = "KB";
   public static final String OZONE_QUOTA_MB = "MB";
   public static final String OZONE_QUOTA_GB = "GB";
   public static final String OZONE_QUOTA_TB = "TB";
 
-  private Units unit;
-  private long size;
-
   /** Quota Units.*/
   public enum Units {UNDEFINED, BYTES, KB, MB, GB, TB}
 
+  // Quota to decide how many buckets or keys can be created.

Review comment:
   ```suggestion
 // Quota to decide how many buckets can be created.
   ```
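   As a side note, the KB/MB/GB/TB unit handling being reviewed can be sketched 
in a self-contained way. `QuotaSketch.parseQuotaBytes` below is a hypothetical 
helper for illustration only, not the actual `OzoneQuota` API:

   ```java
   import java.util.Locale;

   public class QuotaSketch {
       // Hypothetical sketch: parse a quota string such as "10GB" into bytes,
       // mirroring the BYTES/KB/MB/GB/TB units discussed for OzoneQuota.
       public static long parseQuotaBytes(String quota) {
           String q = quota.trim().toUpperCase(Locale.ENGLISH);
           long multiplier = 1L;
           String digits = q;
           if (q.endsWith("KB")) {
               multiplier = 1024L;
               digits = q.substring(0, q.length() - 2);
           } else if (q.endsWith("MB")) {
               multiplier = 1024L * 1024;
               digits = q.substring(0, q.length() - 2);
           } else if (q.endsWith("GB")) {
               multiplier = 1024L * 1024 * 1024;
               digits = q.substring(0, q.length() - 2);
           } else if (q.endsWith("TB")) {
               multiplier = 1024L * 1024 * 1024 * 1024;
               digits = q.substring(0, q.length() - 2);
           } else if (q.endsWith("BYTES")) {
               digits = q.substring(0, q.length() - 5);
           }
           return Long.parseLong(digits.trim()) * multiplier;
       }

       public static void main(String[] args) {
           System.out.println(parseQuotaBytes("10GB")); // 10737418240
       }
   }
   ```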

##
File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/StringUtils.java
##
@@ -149,6 +150,13 @@ public static String 
createStartupShutdownMessage(VersionInfo versionInfo,
 "  java = " + System.getProperty("java.version"));
   }
 
+  /**
+   * The same as String.format(Locale.ENGLISH, format, objects).
+   */
+  public static String format(final String format, final Object... objects) {
+return String.format(Locale.ENGLISH, format, objects);
+  }
+

Review comment:
   I didn't find any usage of this method.
   Could you tell me why we add this part, in case I missed something?
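   For context, a minimal demonstration of why such a locale-pinned helper can 
matter (purely illustrative, not Ozone code):

   ```java
   import java.util.Locale;

   public class LocaleFormatDemo {
       // Same idea as the helper under review: always format with the ENGLISH
       // locale so output does not vary with the JVM's default locale.
       static String format(String fmt, Object... args) {
           return String.format(Locale.ENGLISH, fmt, args);
       }

       public static void main(String[] args) {
           // With the German locale, "%.2f" uses a comma as decimal separator.
           System.out.println(String.format(Locale.GERMANY, "%.2f", 1.5)); // 1,50
           System.out.println(format("%.2f", 1.5));                        // 1.50
       }
   }
   ```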

##
File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/volume/CreateVolumeHandler.java
##
@@ -58,9 +62,12 @@ protected void execute(OzoneClient client, OzoneAddress 
address)
 VolumeArgs.Builder volumeArgsBuilder = VolumeArgs.newBuilder()
 .setAdmin(adminName)
 .setOwner(ownerName);
-if (quota != null) {
-  volumeArgsBuilder.setQuota(quota);
+if (storagespaceQuota!= null) {

Review comment:
   ```suggestion
   if (storagespaceQuota != null) {
   ```

##
File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/VolumeArgs.java
##
@@ -33,25 +33,30 @@
 
   private final String admin;
   private final String owner;
-  private final String quota;
+  private final String storagespaceQuota;
+  private final long namespaceQuota;
   private final List acls;
   private Map metadata;
 
   /**
* Private constructor, constructed via builder.
* @param admin Administrator's name.
* @param owner Volume owner's name
-   * @param 

[GitHub] [hadoop-ozone] cxorm commented on a change in pull request #1233: HDDS-3725. Ozone sh volume client support quota option.

2020-07-23 Thread GitBox


cxorm commented on a change in pull request #1233:
URL: https://github.com/apache/hadoop-ozone/pull/1233#discussion_r459200596



##
File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/VolumeArgs.java
##
@@ -122,8 +136,13 @@ public String getQuota() {
   return this;
 }
 
-public VolumeArgs.Builder setQuota(String quota) {
-  this.volumeQuota = quota;
+public VolumeArgs.Builder setStoragespaceQuota(String quota) {
+  this.storagespaceQuota = quota;
+  return this;
+}
+
+public VolumeArgs.Builder setNamespaceQuotaQuota(long quota) {

Review comment:
   ```suggestion
   public VolumeArgs.Builder setNamespaceQuota(long quota) {
   ```








[GitHub] [hadoop-ozone] cxorm commented on a change in pull request #1233: HDDS-3725. Ozone sh volume client support quota option.

2020-07-23 Thread GitBox


cxorm commented on a change in pull request #1233:
URL: https://github.com/apache/hadoop-ozone/pull/1233#discussion_r459200918



##
File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/volume/CreateVolumeHandler.java
##
@@ -58,9 +62,12 @@ protected void execute(OzoneClient client, OzoneAddress 
address)
 VolumeArgs.Builder volumeArgsBuilder = VolumeArgs.newBuilder()
 .setAdmin(adminName)
 .setOwner(ownerName);
-if (quota != null) {
-  volumeArgsBuilder.setQuota(quota);
+if (storagespaceQuota!= null) {
+  volumeArgsBuilder.setStoragespaceQuota(storagespaceQuota);
 }
+
+volumeArgsBuilder.setNamespaceQuotaQuota(namespaceQuota);

Review comment:
   The function name seems a little redundant.
   
   A suggestion is commented on `VolumeArgs.java`.








[GitHub] [hadoop-ozone] cxorm commented on a change in pull request #1233: HDDS-3725. Ozone sh volume client support quota option.

2020-07-23 Thread GitBox


cxorm commented on a change in pull request #1233:
URL: https://github.com/apache/hadoop-ozone/pull/1233#discussion_r459196128



##
File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/volume/CreateVolumeHandler.java
##
@@ -58,9 +62,12 @@ protected void execute(OzoneClient client, OzoneAddress 
address)
 VolumeArgs.Builder volumeArgsBuilder = VolumeArgs.newBuilder()
 .setAdmin(adminName)
 .setOwner(ownerName);
-if (quota != null) {
-  volumeArgsBuilder.setQuota(quota);
+if (storagespaceQuota!= null) {

Review comment:
   ```suggestion
   if (storagespaceQuota != null) {
   ```








[GitHub] [hadoop-ozone] cxorm commented on a change in pull request #1233: HDDS-3725. Ozone sh volume client support quota option.

2020-07-23 Thread GitBox


cxorm commented on a change in pull request #1233:
URL: https://github.com/apache/hadoop-ozone/pull/1233#discussion_r459194918



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/VolumeManagerImpl.java
##
@@ -273,12 +272,14 @@ private void setOwnerCommitToDB(UserVolumeInfo 
oldOwnerVolumeList,
* Changes the Quota on a volume.
*
* @param volume - Name of the volume.
-   * @param quota - Quota in bytes.
+   * @param namespaceQuota - Quota in count for bucket.
+   * @param storagespaceQuota - Quota in bytes.
*
* @throws IOException
*/
   @Override
-  public void setQuota(String volume, long quota) throws IOException {
+  public void setQuota(String volume, long namespaceQuota,
+   long storagespaceQuota) throws IOException {

Review comment:
   This part is the same as comments on `OzoneManager.java`
   
   








[GitHub] [hadoop-ozone] cxorm commented on a change in pull request #1233: HDDS-3725. Ozone sh volume client support quota option.

2020-07-23 Thread GitBox


cxorm commented on a change in pull request #1233:
URL: https://github.com/apache/hadoop-ozone/pull/1233#discussion_r459192304



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
##
@@ -1706,21 +1706,24 @@ public boolean setOwner(String volume, String owner) 
throws IOException {
* Changes the Quota on a volume.
*
* @param volume - Name of the volume.
-   * @param quota  - Quota in bytes.
+   * @param storagespceQuota - Quota in bytes.
+   * @param namespaceQuota - Quota in counts.
* @throws IOException
*/
   @Override
-  public void setQuota(String volume, long quota) throws IOException {
-if (isAclEnabled) {
+  public void setQuota(String volume, long namespaceQuota,

Review comment:
   Instead of `OzoneManager.java`, this request is processed in 
`OMVolumeSetQuotaRequest.java`.
   
   We could not update the code snippet here.








[GitHub] [hadoop-ozone] cxorm commented on a change in pull request #1233: HDDS-3725. Ozone sh volume client support quota option.

2020-07-23 Thread GitBox


cxorm commented on a change in pull request #1233:
URL: https://github.com/apache/hadoop-ozone/pull/1233#discussion_r459194878



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/VolumeManager.java
##
@@ -49,10 +49,12 @@ void setOwner(String volume, String owner)
* Changes the Quota on a volume.
*
* @param volume - Name of the volume.
-   * @param quota - Quota in bytes.
+   * @param namespaceQuota - Quota in counts.
+   * @param storagespaceQuota - Quota in bytes.
* @throws IOException
*/
-  void setQuota(String volume, long quota) throws IOException;
+  void setQuota(String volume, long namespaceQuota,

Review comment:
   This part is the same as comments on `OzoneManager.java `








[GitHub] [hadoop-ozone] cxorm commented on a change in pull request #1233: HDDS-3725. Ozone sh volume client support quota option.

2020-07-23 Thread GitBox


cxorm commented on a change in pull request #1233:
URL: https://github.com/apache/hadoop-ozone/pull/1233#discussion_r459202326



##
File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/VolumeArgs.java
##
@@ -33,25 +33,30 @@
 
   private final String admin;
   private final String owner;
-  private final String quota;
+  private final String storagespaceQuota;
+  private final long namespaceQuota;
   private final List acls;
   private Map metadata;
 
   /**
* Private constructor, constructed via builder.
* @param admin Administrator's name.
* @param owner Volume owner's name
-   * @param quota Volume Quota.
+   * @param storagespaceQuota Volume Quota.
+   * @param namespaceQuota Volume Quota.

Review comment:
   I think we could update the description of the parameter to be consistent 
with the shell command IMHO.








[GitHub] [hadoop-ozone] adoroszlai commented on pull request #1237: HDDS-3998. Shorten Ozone FS Hadoop compatibility module names

2020-07-23 Thread GitBox


adoroszlai commented on pull request #1237:
URL: https://github.com/apache/hadoop-ozone/pull/1237#issuecomment-662865377


   Thanks @bharatviswa504 for reviewing and merging it.






[GitHub] [hadoop-ozone] cxorm commented on a change in pull request #1233: HDDS-3725. Ozone sh volume client support quota option.

2020-07-23 Thread GitBox


cxorm commented on a change in pull request #1233:
URL: https://github.com/apache/hadoop-ozone/pull/1233#discussion_r459202494



##
File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/VolumeArgs.java
##
@@ -73,11 +78,19 @@ public String getOwner() {
   }
 
   /**
-   * Returns Volume Quota.
-   * @return Quota.
+   * Returns Volume Quota in bytes.
+   * @return storagespaceQuota.
*/
-  public String getQuota() {
-return quota;
+  public String getStoragespaceQuota() {
+return storagespaceQuota;
+  }
+
+  /**
+   * Returns Volume Quota in counts.

Review comment:
   ```suggestion
  * Returns Volume Quota in bucket counts.
   ```








[GitHub] [hadoop-ozone] cxorm commented on a change in pull request #1233: HDDS-3725. Ozone sh volume client support quota option.

2020-07-23 Thread GitBox


cxorm commented on a change in pull request #1233:
URL: https://github.com/apache/hadoop-ozone/pull/1233#discussion_r459214380



##
File path: hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/StringUtils.java
##
@@ -149,6 +150,13 @@ public static String createStartupShutdownMessage(VersionInfo versionInfo,
 "  java = " + System.getProperty("java.version"));
   }
 
+  /**
+   * The same as String.format(Locale.ENGLISH, format, objects).
+   */
+  public static String format(final String format, final Object... objects) {
+return String.format(Locale.ENGLISH, format, objects);
+  }
+

Review comment:
   I didn't find any usage of this method anywhere.
   Could you tell me why we added this part, in case I'm missing something?
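For context on why such a wrapper can be useful: `String.format` without an explicit locale uses the JVM's default locale, so numeric output can vary between machines. The following is an illustrative standalone sketch, not code from the PR (the `LocaleFormatDemo` class name is made up):

```java
import java.util.Locale;

public class LocaleFormatDemo {
    // Pinning the locale makes formatted output stable regardless of the
    // JVM's default locale (decimal separators differ between locales).
    static String format(String fmt, Object... args) {
        return String.format(Locale.ENGLISH, fmt, args);
    }

    public static void main(String[] args) {
        // A German locale formats 1.5 as "1,50"; Locale.ENGLISH keeps "1.50".
        System.out.println(String.format(Locale.GERMAN, "%.2f", 1.5)); // 1,50
        System.out.println(format("%.2f", 1.5));                       // 1.50
    }
}
```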








[GitHub] [hadoop-ozone] adoroszlai commented on pull request #1239: HDDS-3827. Intermittent failure in TestKeyManagerUnit#listMultipartUploads

2020-07-23 Thread GitBox


adoroszlai commented on pull request #1239:
URL: https://github.com/apache/hadoop-ozone/pull/1239#issuecomment-662865142


   Thanks @bharatviswa504 for reviewing and merging it.






[GitHub] [hadoop-ozone] cxorm commented on a change in pull request #1233: HDDS-3725. Ozone sh volume client support quota option.

2020-07-23 Thread GitBox


cxorm commented on a change in pull request #1233:
URL: https://github.com/apache/hadoop-ozone/pull/1233#discussion_r459222501



##
File path: hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/volume/UpdateVolumeHandler.java
##
@@ -39,22 +40,34 @@
   description = "Owner of the volume to set")
   private String ownerName;
 
-  @Option(names = {"--quota"},
-  description = "Quota of the volume to set"
-  + "(eg. 1G)")
-  private String quota;
+  @Option(names = {"--spaceQuota", "-s"},
+  description = "Quota in bytes of the volume to set (eg. 1GB)")
+  private String storagespaceQuota;
+
+  @Option(names = {"--quota", "-q"},
+  description = "Bucket counts of the volume to set (eg. 5)")
+  private long namespaceQuota = OzoneConsts.QUOTA_COUNT_RESET;
 
   @Override
   protected void execute(OzoneClient client, OzoneAddress address)
   throws IOException {
-
 String volumeName = address.getVolumeName();
-
 OzoneVolume volume = client.getObjectStore().getVolume(volumeName);
-if (quota != null && !quota.isEmpty()) {
-  volume.setQuota(OzoneQuota.parseQuota(quota));
+
+long spaceQuota = volume.getStoragespaceQuota();
+long countQuota = volume.getNamespaceQuota();
+
+if (storagespaceQuota != null && !storagespaceQuota.isEmpty()) {
+  spaceQuota = OzoneQuota.parseQuota(storagespaceQuota,
+  namespaceQuota).getStoragespaceQuota();
+}
+if (namespaceQuota >= 0) {
+  countQuota = namespaceQuota;
 }
 
+volume.setQuota(
+OzoneQuota.getOzoneQuota(spaceQuota, countQuota));

Review comment:
   We would not violate `checkstyle` if we used one line here.








[GitHub] [hadoop-ozone] cxorm commented on a change in pull request #1233: HDDS-3725. Ozone sh volume client support quota option.

2020-07-23 Thread GitBox


cxorm commented on a change in pull request #1233:
URL: https://github.com/apache/hadoop-ozone/pull/1233#discussion_r459244192



##
File path: hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/client/OzoneQuota.java
##
@@ -25,26 +25,75 @@
  * represents an OzoneQuota Object that can be applied to
  * a storage volume.
  */
-public class OzoneQuota {
+public final class OzoneQuota {
 
   public static final String OZONE_QUOTA_BYTES = "BYTES";
+  public static final String OZONE_QUOTA_KB = "KB";
   public static final String OZONE_QUOTA_MB = "MB";
   public static final String OZONE_QUOTA_GB = "GB";
   public static final String OZONE_QUOTA_TB = "TB";
 
-  private Units unit;
-  private long size;
-
   /** Quota Units.*/
   public enum Units {UNDEFINED, BYTES, KB, MB, GB, TB}
 
+  // Quota to decide how many buckets or keys can be created.

Review comment:
   ```suggestion
 // Quota to decide how many buckets can be created.
   ```








[GitHub] [hadoop-ozone] cxorm commented on a change in pull request #1233: HDDS-3725. Ozone sh volume client support quota option.

2020-07-23 Thread GitBox


cxorm commented on a change in pull request #1233:
URL: https://github.com/apache/hadoop-ozone/pull/1233#discussion_r459245623



##
File path: hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/client/OzoneQuota.java
##
@@ -25,26 +25,75 @@
  * represents an OzoneQuota Object that can be applied to
  * a storage volume.
  */
-public class OzoneQuota {
+public final class OzoneQuota {
 
   public static final String OZONE_QUOTA_BYTES = "BYTES";
+  public static final String OZONE_QUOTA_KB = "KB";
   public static final String OZONE_QUOTA_MB = "MB";
   public static final String OZONE_QUOTA_GB = "GB";
   public static final String OZONE_QUOTA_TB = "TB";
 
-  private Units unit;
-  private long size;
-
   /** Quota Units.*/
   public enum Units {UNDEFINED, BYTES, KB, MB, GB, TB}
 
+  // Quota to decide how many buckets or keys can be created.
+  private long namespaceQuota;
+  // Quota to decide how many storage space will be used in bytes.
+  private long storagespaceQuota;
+  private RawStorageSpaceQuota rawStoragespaceQuota;
+
+  private static class RawStorageSpaceQuota {

Review comment:
   The name `RawStorageSpace` is a little confusing.
   We could add a comment describing the class `RawStorageSpaceQuota` here, IMHO.








[GitHub] [hadoop-ozone] cxorm commented on a change in pull request #1233: HDDS-3725. Ozone sh volume client support quota option.

2020-07-23 Thread GitBox


cxorm commented on a change in pull request #1233:
URL: https://github.com/apache/hadoop-ozone/pull/1233#discussion_r459260086



##
File path: hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
##
@@ -204,6 +204,16 @@ public static Versioning getVersioning(boolean versioning) {
*/
   public static final long MAX_QUOTA_IN_BYTES = 1024L * 1024 * TB;
 
+  /**
+   * Quota value.
+   */
+  public static final long QUOTA_COUNT_RESET = -1;

Review comment:
   Could you please update the comment to "Quota of bucket counts" and add a description of the meaning of "-1" here?

##
File path: hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/client/OzoneQuota.java
##
@@ -82,25 +133,28 @@ public OzoneQuota(long size, Units unit) {
* @return string representation of quota
*/
   public static String formatQuota(OzoneQuota quota) {
-return String.valueOf(quota.size) + quota.unit;
+return String.valueOf(quota.getRawSize())+ quota.getUnit();
   }
 
   /**
* Parses a user provided string and returns the
* Quota Object.
*
-   * @param quotaString Quota String
+   * @param storagespaceQuotaStr Storage space Quota String
+   * @param namespaceQuota namespace Quota
*
* @return OzoneQuota object
*/
-  public static OzoneQuota parseQuota(String quotaString) {
+  public static OzoneQuota parseQuota(String storagespaceQuotaStr,
+  long namespaceQuota) {

Review comment:
   ```suggestion
 public static OzoneQuota parseQuota(String storagespaceQuotaStr,
 long namespaceQuota) {
   ```
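For readers following the quota-string discussion, here is a rough standalone sketch of what parsing a storage-space string such as "1GB" into a byte count involves. The `QuotaParseDemo` class is hypothetical; the real `OzoneQuota.parseQuota` differs in its details:

```java
public class QuotaParseDemo {
    // Illustrative only: split a quota string into number and unit,
    // then scale to bytes using binary (1024-based) multipliers.
    static long parseBytes(String quota) {
        String digits = quota.replaceAll("[^0-9]", "").trim();
        String unit = quota.replaceAll("[0-9]", "").trim().toUpperCase();
        long n = Long.parseLong(digits);
        switch (unit) {
            case "KB": return n * 1024L;
            case "MB": return n * 1024L * 1024;
            case "GB": return n * 1024L * 1024 * 1024;
            case "TB": return n * 1024L * 1024 * 1024 * 1024;
            case "BYTES":
            case "":   return n;
            default:
                throw new IllegalArgumentException("Unknown unit: " + unit);
        }
    }

    public static void main(String[] args) {
        System.out.println(parseBytes("1GB")); // 1073741824
    }
}
```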

##
File path: hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/client/OzoneQuota.java
##
@@ -53,26 +102,28 @@ public long getSize() {
* @return Unit in MB, GB or TB
*/
   public Units getUnit() {
-return unit;
+return this.rawStoragespaceQuota.getUnit();
   }
 
   /**
* Constructs a default Quota object.
*/
-  public OzoneQuota() {
-this.size = 0;
-this.unit = Units.UNDEFINED;
+  private OzoneQuota() {
+this.namespaceQuota = OzoneConsts.QUOTA_COUNT_RESET;
+this.storagespaceQuota = OzoneConsts.MAX_QUOTA_IN_BYTES;
   }
 
   /**
* Constructor for Ozone Quota.
*
-   * @param size Long Size
-   * @param unit MB, GB  or TB
+   * @param namespaceQuota long value
+   * @param rawStoragespaceQuota RawStorageSpaceQuota value
*/
-  public OzoneQuota(long size, Units unit) {
-this.size = size;
-this.unit = unit;
+  private OzoneQuota(long namespaceQuota,
+ RawStorageSpaceQuota rawStoragespaceQuota) {

Review comment:
   ```suggestion
 private OzoneQuota(long namespaceQuota,
 RawStorageSpaceQuota rawStoragespaceQuota) {
   ```







