[ https://issues.apache.org/jira/browse/HDDS-675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16685505#comment-16685505 ]

Mukul Kumar Singh commented on HDDS-675:
----------------------------------------

Thanks for working on this [~shashikant]. I am +1 on the patch; some of the 
tasks can be done in a follow-up as well.

1) ChunkGroupOutputStream:136, there is an extra `""` in the line; it is not 
needed.
2) ChunkGroupOutputStream:149, the function does not throw IOException; it 
can be removed from the function signature.
3) For streamBufferFlushSize, streamBufferMaxSize, and blockSize, let's use 
getStorageSize in place of getLong; this can be done in a later patch as well 
(a rough sketch follows below).
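
For point 3, here is a minimal sketch of the getStorageSize usage I had in 
mind. The config key names and default values below are only placeholders for 
illustration, not the actual Ozone constants:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.StorageUnit;

public class StreamBufferConfigSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();

    // Current approach: getLong forces the configured value to be a raw byte count.
    long flushSizeOld =
        conf.getLong("ozone.client.stream.buffer.flush.size", 64L * 1024 * 1024);

    // Suggested approach: getStorageSize accepts a unit suffix in the config
    // value (e.g. "64MB") and converts it to the requested StorageUnit.
    long streamBufferFlushSize = (long) conf.getStorageSize(
        "ozone.client.stream.buffer.flush.size", "64MB", StorageUnit.BYTES);
    long streamBufferMaxSize = (long) conf.getStorageSize(
        "ozone.client.stream.buffer.max.size", "128MB", StorageUnit.BYTES);
    long blockSize = (long) conf.getStorageSize(
        "ozone.scm.block.size", "256MB", StorageUnit.BYTES);

    System.out.println(flushSizeOld + " " + streamBufferFlushSize + " "
        + streamBufferMaxSize + " " + blockSize);
  }
}
{code}

With getStorageSize, the value in ozone-site.xml can carry a unit suffix such 
as "64MB" or "4GB" instead of a hard-coded byte count.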

> Add blocking buffer and use watchApi for flush/close in OzoneClient
> -------------------------------------------------------------------
>
>                 Key: HDDS-675
>                 URL: https://issues.apache.org/jira/browse/HDDS-675
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>          Components: Ozone Client
>            Reporter: Shashikant Banerjee
>            Assignee: Shashikant Banerjee
>            Priority: Major
>         Attachments: HDDS-675.000.patch, HDDS-675.001.patch, 
> HDDS-675.002.patch, HDDS-675.003.patch, HDDS-675.004.patch, 
> HDDS-675.005.patch, HDDS-675.006.patch
>
>
> For handling 2-node failures, a blocking buffer will be used which waits 
> for the flush commit index to get updated on all replicas of a container via 
> Ratis.
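
A rough, hypothetical sketch of the blocking-buffer behaviour described in the 
issue, assuming a watchForCommit callback that completes once the given commit 
index is replicated on all container replicas (a stand-in for the Ratis watch 
API; none of the names below are from the actual patch):

{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.function.LongFunction;

class BlockingBuffer {
  private final ByteBuffer buffer;
  // Maps a commit index to a future that completes once every replica of the
  // container has applied that index (the role played by the Ratis watch API).
  private final LongFunction<CompletableFuture<Long>> watchForCommit;

  BlockingBuffer(int capacity, LongFunction<CompletableFuture<Long>> watchForCommit) {
    this.buffer = ByteBuffer.allocate(capacity);
    this.watchForCommit = watchForCommit;
  }

  // Buffers client data; the caller is expected to flush before overflow.
  void write(byte[] data) {
    buffer.put(data);
  }

  // Blocks until the flush commit index is replicated on all replicas; the
  // buffered data is only discarded once replication is confirmed, so it can
  // be replayed if datanodes fail before that point.
  void flush(long flushCommitIndex) throws IOException {
    try {
      watchForCommit.apply(flushCommitIndex).get();
      buffer.clear();
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
      throw new IOException("Interrupted waiting for commit index "
          + flushCommitIndex, e);
    } catch (ExecutionException e) {
      throw new IOException("Replication of commit index " + flushCommitIndex
          + " failed", e);
    }
  }
}
{code}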


