[
https://issues.apache.org/jira/browse/HDDS-247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16593451#comment-16593451
]
Nanda kumar commented on HDDS-247:
----------------------------------
Thanks [~shashikant] for updating the patch. It looks very good to me.
* In {{ChunkGroupOutputStream#handleCloseContainerException}}, we assume that
data has been written to the datanode only if {{streamEntry.currentPosition >=
chunkSize}}, and we skip getting the blockLength from the datanode when the
condition fails. If someone calls flush before the stream reaches the
configured chunkSize, we would already have written some data to the datanode;
this case should be handled. Apart from checking {{streamEntry.currentPosition
>= chunkSize}}, we should also check whether {{streamEntry.currentPosition !=
buffer.position()}}. This will make sure that we are not missing any data that
has been flushed to the datanode.
* ChunkGroupOutputStream:L374 – {{currentStreamIndex}} should never be 0 at
this point; the {{if}} condition check should be replaced with
{{Preconditions#checkNotNull}}.
* Add javadoc for {{ChunkGroupOutputStream#handleFlushOrClose}}; this will
help in understanding the boolean flag that is passed.
* {{ChunkGroupOutputStream#handleFlushOrClose}}: The infinite while loop can
be replaced with a recursive call after the {{handleCloseContainerException}}
call.
* {{ChunkGroupOutputStream#ChunkOutputStreamEntry#getBuffer}}: We don't need
the null check; {{instanceof}} handles null as well. Also, we should throw an
IOException instead of returning null when the stream is not an instance of
{{ChunkOutputStream}}.
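To illustrate the first point, here is a minimal, self-contained sketch of the combined condition; the method and parameter names are illustrative stand-ins, not the actual {{ChunkGroupOutputStream}} members:

```java
// Hypothetical sketch: data may have reached the datanode either because a
// full chunk was written (currentPosition >= chunkSize) or because an
// explicit flush pushed part of the buffer out, leaving currentPosition
// out of sync with buffer.position().
public class CommittedLengthCheck {

    /**
     * Returns true when the client should query the datanode for the
     * committed block length before updating the OM.
     */
    static boolean mustFetchCommittedLength(long currentPosition,
                                            long chunkSize,
                                            long bufferPosition) {
        // Existing check: at least one full chunk written to the datanode.
        boolean fullChunkWritten = currentPosition >= chunkSize;
        // Suggested additional check: a flush pushed buffered data out, so
        // the datanode holds bytes the local buffer no longer accounts for.
        boolean flushedBeforeChunkBoundary = currentPosition != bufferPosition;
        return fullChunkWritten || flushedBeforeChunkBoundary;
    }

    public static void main(String[] args) {
        // flush() after 10 bytes with a 4 KB chunk size: the chunk-size
        // check alone would skip the datanode query, the combined one won't.
        System.out.println(mustFetchCommittedLength(10, 4096, 0)); // true
    }
}
```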
Nitpicks:
* {{ChunkOutputStream#getBuffer}}: Move this method below the constructor.
* OmKeyInfo:L132 typo – {{very}} -> {{vary}}
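The while-loop restructuring suggested for {{handleFlushOrClose}} can be sketched as below; all names, including the unchecked ClosedContainerException stand-in, are illustrative and not Ozone's actual API:

```java
// Hypothetical sketch: recover from the close-container failure once, then
// retry via a recursive call instead of spinning in `while (true)`.
public class FlushOrCloseSketch {

    static class ClosedContainerException extends RuntimeException { }

    static int failuresRemaining; // simulates repeated CLOSED_CONTAINER_IO

    static void writeToCurrentStream() {
        if (failuresRemaining > 0) {
            failuresRemaining--;
            throw new ClosedContainerException();
        }
    }

    static void handleCloseContainerException() {
        // stand-in for: fetch committed block length from the datanode,
        // allocate a new block, and re-buffer the unacknowledged data
    }

    static void handleFlushOrClose(boolean close) {
        try {
            writeToCurrentStream();
        } catch (ClosedContainerException e) {
            handleCloseContainerException();
            handleFlushOrClose(close); // recursion replaces the infinite loop
        }
    }

    public static void main(String[] args) {
        failuresRemaining = 2;
        handleFlushOrClose(true);
        System.out.println(failuresRemaining); // prints 0
    }
}
```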
> Handle CLOSED_CONTAINER_IO exception in ozoneClient
> ---------------------------------------------------
>
> Key: HDDS-247
> URL: https://issues.apache.org/jira/browse/HDDS-247
> Project: Hadoop Distributed Data Store
> Issue Type: Bug
> Components: Ozone Client
> Reporter: Shashikant Banerjee
> Assignee: Shashikant Banerjee
> Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-247.00.patch, HDDS-247.01.patch, HDDS-247.02.patch,
> HDDS-247.03.patch, HDDS-247.04.patch, HDDS-247.05.patch, HDDS-247.06.patch,
> HDDS-247.07.patch, HDDS-247.08.patch, HDDS-247.09.patch, HDDS-247.10.patch,
> HDDS-247.11.patch
>
>
> In case of ongoing writes by Ozone client to a container, the container might
> get closed on the Datanodes because of node loss, out of space issues etc. In
> such cases, the operation will fail with a CLOSED_CONTAINER_IO exception.
> When this happens, the ozone client should try to get the committed length
> of the block from the Datanodes and update the OM. This Jira aims to address
> this issue.
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)