[ 
https://issues.apache.org/jira/browse/HDDS-203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16553385#comment-16553385
 ] 

Tsz Wo Nicholas Sze commented on HDDS-203:
------------------------------------------

Thanks Shash for the new patch.  Some thoughts:
- It seems quite expensive to parse and create the entire KeyData object in 
order to get the block length.  How about either (1) storing the size as a field 
in KeyData or (2) retrieving the chunk sizes without parsing the KeyData object?  
It seems that (1) is better, although it needs more work.
- Let's have blockID in GetCommittedBlockLengthResponseProto.
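
Option (2) above could be sketched roughly as follows. This is a hypothetical illustration, not the actual Ozone API: the `ChunkInfo` stand-in and `computeCommittedLength` helper are made-up names, standing in for summing the per-chunk `len` fields instead of deserializing the whole KeyData object.

```java
import java.util.Arrays;
import java.util.List;

public class CommittedBlockLength {
  // Stand-in for a chunk's length field (the real ChunkInfo proto carries more).
  static final class ChunkInfo {
    final long len;
    ChunkInfo(long len) { this.len = len; }
  }

  // Option (2): derive the committed block length by summing chunk lengths,
  // avoiding a full parse of the KeyData object.
  static long computeCommittedLength(List<ChunkInfo> chunks) {
    return chunks.stream().mapToLong(c -> c.len).sum();
  }

  public static void main(String[] args) {
    List<ChunkInfo> chunks = Arrays.asList(
        new ChunkInfo(4096), new ChunkInfo(4096), new ChunkInfo(1024));
    // Two full chunks plus a partial last chunk.
    System.out.println(computeCommittedLength(chunks)); // 9216
  }
}
```

Option (1), storing the size as a field in KeyData, trades this per-request summation for a small write-path cost, which is presumably why it is preferred despite needing more work.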


> Add getCommittedBlockLength API in datanode
> -------------------------------------------
>
>                 Key: HDDS-203
>                 URL: https://issues.apache.org/jira/browse/HDDS-203
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>          Components: Ozone Client, Ozone Datanode
>            Reporter: Shashikant Banerjee
>            Assignee: Shashikant Banerjee
>            Priority: Major
>             Fix For: 0.2.1
>
>         Attachments: HDDS-203.00.patch, HDDS-203.01.patch, HDDS-203.02.patch, 
> HDDS-203.03.patch, HDDS-203.04.patch
>
>
> When a container gets closed on the Datanode while active writes are 
> happening from the OzoneClient, client write requests will fail with 
> ContainerClosedException. In such cases, the Ozone client needs to query the 
> last committed block length from the Datanodes and update the OzoneMaster with 
> the updated length for the block. This Jira proposes to add an RPC call to 
> get the last committed length of a block on a Datanode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
