[jira] [Comment Edited] (HDDS-835) Use storageSize instead of Long for buffer size configs in Ozone Client

2018-11-19 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692675#comment-16692675
 ] 

Shashikant Banerjee edited comment on HDDS-835 at 11/20/18 5:02 AM:


Thanks [~msingh], for the review.
{code:java}
ScmConfigKeys:140, lets change OZONE_SCM_CHUNK_MAX_SIZE to 32MB as well{code}
Since OZONE_SCM_CHUNK_MAX_SIZE is a constant, it has been moved to OzoneConsts.java.
{code:java}
TestFailureHandlingByClient:91, the SCM_BLOCK size needs to be set here
{code}
BlockSize is already set to the required value when the MiniOzoneCluster instance is
created. No need to set it here.

Rest of the review comments are addressed.
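For reference, the point of storage-size configs is to accept human-readable values like "32MB" or "4KB" instead of raw byte counts. A minimal, self-contained sketch of that conversion (mimicking the spirit of Hadoop's getStorageSize, not Ozone's actual StorageSize implementation) might look like:

```java
public class StorageSizeSketch {
  // Hypothetical sketch: convert a human-readable size string
  // (e.g. "32MB", "4KB", "256MB") to a byte count. This is an
  // illustration only, not Ozone's real parsing code.
  public static long parseToBytes(String value) {
    String v = value.trim().toUpperCase();
    long multiplier = 1L;
    if (v.endsWith("KB")) {
      multiplier = 1L << 10;
      v = v.substring(0, v.length() - 2);
    } else if (v.endsWith("MB")) {
      multiplier = 1L << 20;
      v = v.substring(0, v.length() - 2);
    } else if (v.endsWith("GB")) {
      multiplier = 1L << 30;
      v = v.substring(0, v.length() - 2);
    } else if (v.endsWith("B")) {
      v = v.substring(0, v.length() - 1);
    }
    return Long.parseLong(v.trim()) * multiplier;
  }

  public static void main(String[] args) {
    System.out.println(parseToBytes("32MB")); // prints 33554432
  }
}
```

With this style, a chunk-size config can be written as "32MB" in ozone-site.xml and still yield a long internally.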

 


was (Author: shashikant):
Thanks [~msingh], for the review.

 
{code:java}
ScmConfigKeys:140, lets change OZONE_SCM_CHUNK_MAX_SIZE to 32MB as well{code}
Since OZONE_SCM_CHUNK_MAX_SIZE is a constant, it has been moved to OzoneConsts.java.
{code:java}
TestFailureHandlingByClient:91, the SCM_BLOCK size needs to be set here
{code}
BlockSize is already set to the required value when the MiniOzoneCluster instance is
created. No need to set it here.

Rest of the review comments are addressed.

 

> Use storageSize instead of Long for buffer size configs in Ozone Client
> ---
>
> Key: HDDS-835
> URL: https://issues.apache.org/jira/browse/HDDS-835
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-835.000.patch, HDDS-835.001.patch
>
>
> As per [~msingh]'s review comments in HDDS-675, for the streamBufferFlushSize,
> streamBufferMaxSize, and blockSize configs, we should use getStorageSize instead
> of a long value. This Jira aims to address that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-835) Use storageSize instead of Long for buffer size configs in Ozone Client

2018-11-19 Thread Mukul Kumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16692063#comment-16692063
 ] 

Mukul Kumar Singh edited comment on HDDS-835 at 11/19/18 5:56 PM:
--

Thanks for working on this, [~shashikant]; the patch looks really good to me.
There are some checkstyle issues with the patch, plus some minor comments:

1) ozone-default.xml:627, this value should be 256MB, I think.
2) ScmConfigKeys:140, let's change OZONE_SCM_CHUNK_MAX_SIZE to 32MB as well.
3) TestFailureHandlingByClient:91, the SCM_BLOCK size needs to be set here.
4) XceiverServerRatis, can we also use the size config in newRaftProperties? This
will help clean up config handling.
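Point 4 above is about reusing the size-valued config when building the Raft properties. A hypothetical sketch of what that centralization could look like (the config key, the property key, and the toBytes helper are all assumptions for illustration, with java.util.Properties standing in for Ozone's Configuration):

```java
import java.util.Properties;

public class RaftSizeConfigSketch {
  // Convert a human-readable size like "32MB" to bytes (sketch only).
  static long toBytes(String v) {
    v = v.trim().toUpperCase();
    if (v.endsWith("MB")) {
      return Long.parseLong(v.substring(0, v.length() - 2).trim()) << 20;
    }
    if (v.endsWith("KB")) {
      return Long.parseLong(v.substring(0, v.length() - 2).trim()) << 10;
    }
    return Long.parseLong(v);
  }

  // Read the size-valued config once and reuse it when populating the
  // Raft-side properties, so the unit conversion lives in one place
  // instead of being repeated by every consumer.
  static Properties newRaftProperties(Properties conf) {
    long chunkMax = toBytes(conf.getProperty("ozone.scm.chunk.max.size", "32MB"));
    Properties raft = new Properties();
    raft.setProperty("raft.server.log.segment.size", Long.toString(chunkMax));
    return raft;
  }

  public static void main(String[] args) {
    Properties conf = new Properties();
    conf.setProperty("ozone.scm.chunk.max.size", "16MB");
    System.out.println(
        newRaftProperties(conf).getProperty("raft.server.log.segment.size"));
    // prints 16777216
  }
}
```

The benefit is that a change to the configured chunk size automatically flows into the Raft properties without a second, hand-written conversion.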


was (Author: msingh):
Thanks for working on this [~shashikant].
There are some checkstyle issues with the patch.

1) ozone-default.xml:627, this value should be 256MB, I think.
2) ScmConfigKeys:140, let's change OZONE_SCM_CHUNK_MAX_SIZE to 32MB as well.
3) TestFailureHandlingByClient:91, the SCM_BLOCK size needs to be set here.
4) XceiverServerRatis, can we also use the size config in newRaftProperties? This
will help clean up config handling.

> Use storageSize instead of Long for buffer size configs in Ozone Client
> ---
>
> Key: HDDS-835
> URL: https://issues.apache.org/jira/browse/HDDS-835
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-835.000.patch
>
>
> As per [~msingh]'s review comments in HDDS-675, for the streamBufferFlushSize,
> streamBufferMaxSize, and blockSize configs, we should use getStorageSize instead
> of a long value. This Jira aims to address that.


