[
https://issues.apache.org/jira/browse/HDDS-247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16591176#comment-16591176
]
Mukul Kumar Singh commented on HDDS-247:
----------------------------------------
Thanks for working on this [~shashikant] and for updating the patch. Please
find my review comments below.
1) In ChunkGroupOutputStream, #close and #flush share almost all of their
code. Please change the signature of handleClose to accept a supplier/argument
that identifies the correct op, so both paths can go through one shared
handler (see the sketch after this list).
2) handleWrite can update the block length twice: once while handling the
close container exception, and then again at handleWrite#309.
3) Also, in the exception case the block length is already updated correctly
inside handleCloseContainerException, so getEffectiveDataWritten does not seem
necessary.
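
To illustrate comment 1, here is a minimal sketch of the shared handler. This
is only an illustration: the StreamOp enum, the handleFlushOrClose name, and
the class body are hypothetical stand-ins, not the actual
ChunkGroupOutputStream internals.

{code:java}
import java.io.IOException;
import java.io.OutputStream;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch only; not the real ChunkGroupOutputStream code.
public class ChunkGroupOutputStreamSketch extends OutputStream {

  private enum StreamOp { FLUSH, CLOSE }

  private final List<OutputStream> streamEntries = new ArrayList<>();

  @Override
  public void write(int b) throws IOException {
    // Omitted: buffering and block-allocation logic.
  }

  @Override
  public void flush() throws IOException {
    handleFlushOrClose(StreamOp.FLUSH);
  }

  @Override
  public void close() throws IOException {
    handleFlushOrClose(StreamOp.CLOSE);
  }

  // The previously duplicated body lives here once; the op argument
  // picks the terminal action for each underlying stream.
  private void handleFlushOrClose(StreamOp op) throws IOException {
    for (OutputStream entry : streamEntries) {
      if (op == StreamOp.CLOSE) {
        entry.close();
      } else {
        entry.flush();
      }
    }
  }
}
{code}

Passing the op in keeps the shared iteration and any exception handling in one
place for both #flush and #close.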
> Handle CLOSED_CONTAINER_IO exception in ozoneClient
> ---------------------------------------------------
>
> Key: HDDS-247
> URL: https://issues.apache.org/jira/browse/HDDS-247
> Project: Hadoop Distributed Data Store
> Issue Type: Bug
> Components: Ozone Client
> Reporter: Shashikant Banerjee
> Assignee: Shashikant Banerjee
> Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-247.00.patch, HDDS-247.01.patch, HDDS-247.02.patch,
> HDDS-247.03.patch, HDDS-247.04.patch, HDDS-247.05.patch, HDDS-247.06.patch,
> HDDS-247.07.patch, HDDS-247.08.patch, HDDS-247.09.patch
>
>
> In case of ongoing writes by the Ozone client to a container, the container
> might get closed on the Datanodes because of node loss, out-of-space issues,
> etc. The operation will then fail with a CLOSED_CONTAINER_IO exception. When
> that happens, the Ozone client should get the committed length of the block
> from the Datanodes and update the OM accordingly. This Jira aims to address
> this issue.
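
For reference, a hypothetical sketch of the recovery flow described above;
every name here is an illustrative stand-in under those assumptions, not the
real Ozone client API:

{code:java}
import java.io.IOException;

// Hypothetical sketch of the CLOSED_CONTAINER_IO recovery path; all
// method names are stand-ins, not actual Ozone client calls.
public class ClosedContainerRecoverySketch {

  void onClosedContainerException(long blockId, byte[] buffered)
      throws IOException {
    // 1. The container was closed mid-write; ask the datanodes how
    //    many bytes of this block were actually committed.
    long committed = getCommittedBlockLength(blockId);

    // 2. Report the committed length to OM so the key's metadata
    //    matches what is really stored on the datanodes.
    updateBlockLengthInOm(blockId, committed);

    // 3. Any buffered data past the committed length must be
    //    re-written to a newly allocated block in an open container.
    rewriteToNewBlock(buffered, committed);
  }

  private long getCommittedBlockLength(long blockId) throws IOException {
    throw new UnsupportedOperationException("stand-in only");
  }

  private void updateBlockLengthInOm(long blockId, long length) {
    // stand-in only
  }

  private void rewriteToNewBlock(byte[] data, long committedOffset) {
    // stand-in only
  }
}
{code}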