[ https://issues.apache.org/jira/browse/HDDS-247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16566901#comment-16566901 ]
Shashikant Banerjee commented on HDDS-247:
------------------------------------------
Uploaded patch v1. There are still some pending issues that need to be
addressed:
The Ozone outputStream flushes data to the Datanodes once it reaches the
chunkSize limit. Consider the case where the client's data has been partially
flushed to the Datanode while the remaining data still resides in the
streamBuffer, and meanwhile the container gets closed. Subsequent
write/flush/close calls then need to allocate a new block, copy the remaining
data from the stream buffer, and write it out again, as sketched below.
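A minimal sketch of that rewrite path, assuming a hypothetical
BlockClient/Block abstraction and a ClosedContainerException stand-in for the
CLOSED_CONTAINER_IO result (this is not the actual ChunkGroupOutputStream code
in the patch):
{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;

public class ClosedContainerRetrySketch {

  /** Hypothetical client abstraction for allocating and writing blocks. */
  interface BlockClient {
    Block allocateBlock() throws IOException;
    void writeChunk(Block block, ByteBuffer chunk) throws IOException;
  }

  interface Block { }

  /** Stand-in for a write failing with CLOSED_CONTAINER_IO. */
  static class ClosedContainerException extends IOException { }

  private final BlockClient client;
  private Block currentBlock;
  // Data not yet acknowledged by the Datanode stays buffered here.
  private final ByteBuffer streamBuffer = ByteBuffer.allocate(4 * 1024 * 1024);

  ClosedContainerRetrySketch(BlockClient client) throws IOException {
    this.client = client;
    this.currentBlock = client.allocateBlock();
  }

  void flushBuffered() throws IOException {
    // Work on a duplicate so the buffered bytes survive a failed write.
    ByteBuffer pending = streamBuffer.duplicate();
    pending.flip();
    try {
      client.writeChunk(currentBlock, pending);
    } catch (ClosedContainerException e) {
      // Container closed mid-write: allocate a new block and replay
      // the still-buffered data against it.
      currentBlock = client.allocateBlock();
      pending.rewind();
      client.writeChunk(currentBlock, pending);
    }
    streamBuffer.clear();
  }
}
{code}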
> Handle CLOSED_CONTAINER_IO exception in ozoneClient
> ---------------------------------------------------
>
> Key: HDDS-247
> URL: https://issues.apache.org/jira/browse/HDDS-247
> Project: Hadoop Distributed Data Store
> Issue Type: Bug
> Components: Ozone Client
> Reporter: Shashikant Banerjee
> Assignee: Shashikant Banerjee
> Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-247.00.patch, HDDS-247.01.patch
>
>
> In case of ongoing writes by the Ozone client to a container, the container
> might get closed on the Datanodes because of node loss, out-of-space issues,
> etc. In such cases, the write will fail with a CLOSED_CONTAINER_IO exception,
> and the Ozone client should try to get the committed length of the block from
> the Datanodes and update the KSM. This Jira aims to address this issue.
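For reference, a minimal sketch of the recovery step the description asks for;
DatanodeClient, KsmClient, getCommittedBlockLength and updateBlockLength are
illustrative names, not the real Ozone client API:
{code:java}
import java.io.IOException;

public class CommittedLengthRecoverySketch {

  /** Hypothetical stand-in for the client's channel to a Datanode. */
  interface DatanodeClient {
    // How many bytes of the block the Datanode committed before
    // the container was closed.
    long getCommittedBlockLength(String blockId) throws IOException;
  }

  /** Hypothetical stand-in for the KSM client interface. */
  interface KsmClient {
    // Records the final block length in the key's metadata.
    void updateBlockLength(String blockId, long committedLength)
        throws IOException;
  }

  static void onClosedContainer(DatanodeClient dn, KsmClient ksm,
      String blockId) throws IOException {
    // The client's own byte count may be stale after CLOSED_CONTAINER_IO;
    // the Datanode's committed length is authoritative.
    long committed = dn.getCommittedBlockLength(blockId);
    ksm.updateBlockLength(blockId, committed);
  }
}
{code}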