[
https://issues.apache.org/jira/browse/HDDS-328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16576956#comment-16576956
]
Hanisha Koneru commented on HDDS-328:
-------------------------------------
Thanks [~elek] for updating the patch.
The patch LGTM overall. I have a few more comments:
* In KeyValueContainer#importContainerData(), we are unpacking the tar and
placing all of its contents on the new node (including the .container file).
{code:java}
Line 374: packer.unpack(this, input);{code}
We only update the container yaml with the correct values (the new path
locations) later, at Line 392.
{code:java}
Line 392: update(originalContainerData.getMetadata(), true);{code}
If any error occurs between these two steps, we will be left with an incorrect
.container file on the new node.
This can be avoided in two ways:
## Delete the .container file in case any exception occurs
## Write a new .container file as the last step (just before adding the
container to containerSet). We do not copy it from the tar in the unpack step.
Instead, we create a new .container file.
In the 1st method, we end up reading the .container file twice - once to get
the maxSize and once again to copy it to the new location. So I prefer the 2nd
method.
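The 2nd method could look roughly like the following. This is only a minimal sketch; unpackDataFiles and buildDescriptor are illustrative placeholders, not the actual Ozone packer/ContainerDataYaml API:
{code:java}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class ImportSketch {

  static void importContainerData(Path containerDir) throws IOException {
    Files.createDirectories(containerDir);

    // Step 1: unpack only the chunk files and the metadata DB; the
    // .container file inside the tar is deliberately skipped, so a stale
    // descriptor never lands on disk.
    unpackDataFiles(containerDir);

    // Step 2 (last step, just before adding to containerSet): write a
    // fresh .container descriptor with this node's path values. A
    // half-finished import can then never leave a wrong descriptor behind.
    Path descriptor = containerDir.resolve("container.yaml");
    Files.write(descriptor, buildDescriptor(containerDir));
  }

  static void unpackDataFiles(Path dir) throws IOException {
    // Placeholder for packer.unpack(...) filtered to data files only.
    Files.write(dir.resolve("chunk_1"), new byte[] {1, 2, 3});
  }

  static byte[] buildDescriptor(Path dir) {
    // Placeholder for ContainerDataYaml serialization with updated paths.
    return ("chunksPath: " + dir + "\n").getBytes(StandardCharsets.UTF_8);
  }
}
{code}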
* In case any error does occur, we should clean up all the related files
and folders.
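The cleanup could be a best-effort recursive delete wrapped around the import steps, roughly like this (a sketch only; the Runnable stands in for the actual import logic):
{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

public class CleanupSketch {

  // If any step of the import fails, remove every file and folder created
  // so far so the node is left clean.
  static void importWithCleanup(Path containerDir, Runnable importSteps)
      throws IOException {
    try {
      importSteps.run();
    } catch (RuntimeException e) {
      deleteRecursively(containerDir);
      throw e;
    }
  }

  static void deleteRecursively(Path root) throws IOException {
    if (!Files.exists(root)) {
      return;
    }
    try (Stream<Path> walk = Files.walk(root)) {
      // Reverse order so children are deleted before their parent dirs.
      walk.sorted(Comparator.reverseOrder()).forEach(p -> {
        try {
          Files.delete(p);
        } catch (IOException io) {
          // Best effort: a leftover path is caught by the pre-import check.
        }
      });
    }
  }
}
{code}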
* (Sorry I missed this earlier) Before importing the container, we should
check that the containerID does not already exist on this node.
** ContainerSet should not have this containerID
I think this check should in general suffice to ensure we are not duplicating
the data. But since we might get an error even while cleaning up, it would be
good to add another check.
** Check that the location we are unpacking into does not already exist.
In case it does exist but the container is not part of containerSet, do we
overwrite the data?
I am okay with addressing these either here or in HDDS-75.
+1 for patch v03 on the condition that these comments are addressed in HDDS-75.
> Support export and import of the KeyValueContainer
> --------------------------------------------------
>
> Key: HDDS-328
> URL: https://issues.apache.org/jira/browse/HDDS-328
> Project: Hadoop Distributed Data Store
> Issue Type: Improvement
> Components: Ozone Datanode
> Reporter: Elek, Marton
> Assignee: Elek, Marton
> Priority: Critical
> Fix For: 0.2.1
>
> Attachments: HDDS-328.002.patch, HDDS-328.003.patch
>
>
> In HDDS-75 we pack the container data to an archive file, copy to other
> datanodes and create the container from the archive.
> As I wrote in the comment of HDDS-75, I propose separating the patch to make
> it easier to review.
> In this patch we need to extend the existing Container interface by adding
> export/import methods that save the container data to a single binary
> input/output stream.
>
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]