[
https://issues.apache.org/jira/browse/HDDS-75?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Elek, Marton updated HDDS-75:
-----------------------------
Attachment: HDDS-75.014.patch
> Ozone: Support CopyContainer
> ----------------------------
>
> Key: HDDS-75
> URL: https://issues.apache.org/jira/browse/HDDS-75
> Project: Hadoop Distributed Data Store
> Issue Type: Improvement
> Components: Ozone Datanode
> Reporter: Anu Engineer
> Assignee: Elek, Marton
> Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-75.005.patch, HDDS-75.006.patch, HDDS-75.007.patch,
> HDDS-75.009.patch, HDDS-75.010.patch, HDDS-75.011.patch, HDDS-75.012.patch,
> HDDS-75.013.patch, HDDS-75.014.patch, HDFS-11686-HDFS-7240.001.patch,
> HDFS-11686-HDFS-7240.002.patch, HDFS-11686-HDFS-7240.003.patch,
> HDFS-11686-HDFS-7240.004.patch
>
>
> Once a container is closed we need to copy the container to the correct pool
> or re-encode it to use erasure coding. CopyContainer allows users to get the
> container as a tarball from a remote machine.
> CopyContainer is the basic step to move the raw container data from one
> datanode to another node. It could be used by higher-level components such
> as the SCM, which ensures that the replication rules are satisfied.
> The CopyContainer by default works in a pull model: the destination datanode
> can read the raw data from one or more source datanodes where the container
> exists.
> The source provides a binary representation of the container over a common
> interface which has two methods:
> # prepare(containerName)
> # copyData(String containerName, OutputStream destination)
> The prepare phase is called right after the closing event, and the
> implementation could prepare for the copy by pre-creating a compressed tar
> file from the container data. As a first step we can provide a simple
> implementation which creates the tar files on demand.
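> A minimal sketch of how such a source-side interface could look (the interface
> name and javadoc are assumptions for illustration, not a final API; only the
> two method signatures come from the description above):
> {code:java}
> import java.io.IOException;
> import java.io.OutputStream;
>
> /**
>  * Hypothetical sketch of the source side of CopyContainer.
>  */
> public interface ContainerReplicationSource {
>
>   /**
>    * Prepare the container for copying, e.g. by pre-creating a compressed
>    * tar file. Called right after the container is closed.
>    */
>   void prepare(String containerName) throws IOException;
>
>   /**
>    * Stream the binary (tar) representation of the container to the
>    * given destination.
>    */
>   void copyData(String containerName, OutputStream destination)
>       throws IOException;
> }
> {code}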
> The destination datanode should retry the copy if the container on the source
> node is not yet prepared.
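> A rough sketch of that retry loop on the destination side, assuming a
> hypothetical download helper, a hypothetical "not yet prepared" exception and
> hypothetical retry constants:
> {code:java}
> // Sketch only: helper, exception and constants are hypothetical.
> private void copyWithRetry(String containerName, String sourceHost)
>     throws IOException, InterruptedException {
>   for (int attempt = 0; attempt < MAX_RETRIES; attempt++) {
>     try {
>       downloadContainer(containerName, sourceHost); // hypothetical helper
>       return;
>     } catch (ContainerNotYetPreparedException e) {  // hypothetical exception
>       Thread.sleep(RETRY_INTERVAL_MS);              // wait before the next attempt
>     }
>   }
>   throw new IOException("Container " + containerName
>       + " was not prepared on " + sourceHost + " in time");
> }
> {code}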
> The raw container data is provided over HTTP. The HTTP endpoint should be
> separated from the ObjectStore REST API (similar to the distinction between
> HDFS-7240 and HDFS-13074).
> Long-term, the HTTP endpoint should support HTTP Range requests: one container
> could be copied from multiple sources by the destination.
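> As an illustration, assuming a hypothetical /container/<name> path and port,
> the destination could fetch different byte ranges of the same tarball from
> different sources using the standard Range header:
> {code:java}
> // Sketch: the port and endpoint path are assumptions; only the standard
> // HTTP Range header usage is shown.
> void fetchRange(String sourceHost, String containerName, File localTarFile,
>     long startOffset, long endOffset) throws IOException {
>   URL url = new URL("http://" + sourceHost + ":9880/container/" + containerName);
>   HttpURLConnection conn = (HttpURLConnection) url.openConnection();
>   conn.setRequestProperty("Range", "bytes=" + startOffset + "-" + endOffset);
>   try (InputStream in = conn.getInputStream();
>        RandomAccessFile out = new RandomAccessFile(localTarFile, "rw")) {
>     out.seek(startOffset); // each slice is written at its own offset
>     byte[] buffer = new byte[8192];
>     int read;
>     while ((read = in.read(buffer)) != -1) {
>       out.write(buffer, 0, read);
>     }
>   }
> }
> {code}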
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]