[ https://issues.apache.org/jira/browse/HDFS-11686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16374103#comment-16374103 ]

Elek, Marton commented on HDFS-11686:
-------------------------------------

Uploaded the patch with working container replication:

1. It's a simplified solution based on the pull model, but everything is behind 
interfaces, so we can implement a more sophisticated version later. (For 
example: there is no separate preparation phase, we stream the container.tar.gz 
on demand, and we download it from a single datanode only for now. Both aspects 
could be improved behind the existing interfaces.)
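
A minimal sketch of how that source-side interface could look (the interface 
and method names are illustrative, based on the prepare/copyData methods from 
the issue description below, not the exact code of the patch):

import java.io.IOException;
import java.io.OutputStream;

/**
 * Sketch of the source side of the replication. An implementation can
 * pre-create the archive in prepare() or do everything on demand in
 * copyData().
 */
public interface ContainerReplicationSource {

  /** Called right after the container is closed. */
  void prepare(String containerName) throws IOException;

  /**
   * Writes a binary representation (e.g. container.tar.gz) of the
   * container to the given stream.
   */
  void copyData(String containerName, OutputStream destination)
      throws IOException;
}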

2. The datanode contains a new HDLS REST web server (DatanodeID is extended 
accordingly). I prefer to keep the HDLS and Ozone endpoints separated. We can't 
use the existing RPC endpoints, as the XceiverClient communicates only with the 
leader. We need a private channel through which we can connect to any datanode 
in the pipeline (to download missing containers in parallel).
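
To illustrate point 2, a hedged sketch of how the destination side could use 
such a channel: it asks every datanode of the pipeline in parallel and keeps 
the first successful download. The URL layout, the port and the retry policy 
are assumptions, not the actual endpoint of the patch:

import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/** Sketch only: endpoint layout and port are assumptions. */
public class ContainerDownloader {

  private final ExecutorService executor = Executors.newFixedThreadPool(4);

  /** Ask all pipeline members in parallel, keep the first success. */
  public Path download(String containerName, List<String> pipelineHosts)
      throws Exception {
    List<Callable<Path>> tasks = new ArrayList<>();
    for (String host : pipelineHosts) {
      tasks.add(() -> downloadFrom(host, containerName));
    }
    return executor.invokeAny(tasks);
  }

  private Path downloadFrom(String host, String containerName)
      throws IOException {
    // Hypothetical URL exposed by the new HDLS web server on the datanode.
    URL url = new URL("http://" + host + ":9880/container/"
        + containerName + ".tar.gz");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    if (conn.getResponseCode() != HttpURLConnection.HTTP_OK) {
      // E.g. the source has not prepared the container yet; the caller
      // is expected to retry later.
      throw new IOException(containerName + " is not available on " + host);
    }
    Path dest = Files.createTempFile(containerName, ".tar.gz");
    try (InputStream in = conn.getInputStream()) {
      Files.copy(in, dest, StandardCopyOption.REPLACE_EXISTING);
    }
    return dest;
  }
}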
 
3. The SCM should request the copy from the destination datanode, sending the 
list of source datanodes (the pipeline) together with the request.
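
A hypothetical shape of that SCM-to-datanode command (illustrative only; the 
class and field names are my assumption, not the patch):

import java.util.List;

/**
 * Hypothetical SCM -> destination datanode command: the destination
 * receives the container name plus the pipeline members that already
 * hold a replica, and pulls the data from one of them.
 */
public class CopyContainerCommand {

  private final String containerName;
  private final List<String> sourceDatanodes; // the pipeline

  public CopyContainerCommand(String containerName,
      List<String> sourceDatanodes) {
    this.containerName = containerName;
    this.sourceDatanodes = sourceDatanodes;
  }

  public String getContainerName() {
    return containerName;
  }

  public List<String> getSourceDatanodes() {
    return sourceDatanodes;
  }
}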



> Ozone: Support CopyContainer
> ----------------------------
>
>                 Key: HDFS-11686
>                 URL: https://issues.apache.org/jira/browse/HDFS-11686
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: ozone
>            Reporter: Anu Engineer
>            Assignee: Elek, Marton
>            Priority: Major
>              Labels: OzonePostMerge
>         Attachments: HDFS-11686-HDFS-7240.001.patch, 
> HDFS-11686-HDFS-7240.wip.patch
>
>
> Once a container is closed, we need to copy the container to the correct pool 
> or re-encode the container to use erasure coding. The copyContainer allows 
> users to get the container as a tarball from the remote machine.
> The copyContainer is a basic step to move the raw container data from one 
> datanode to another node. It could be used by higher-level components such 
> as the SCM, which ensures that the replication rules are satisfied.
> The CopyContainer by default works in a pull model: the destination datanode 
> can read the raw data from one or more source datanodes where the container 
> exists.
> The source provides a binary representation of the container over a common 
> interface which has two methods:
>  # prepare(containerName)
>  # copyData(String containerName, OutputStream destination)
> The prepare phase is called right after the closing event, and the 
> implementation can prepare for the copy by pre-creating a compressed tar file 
> from the container data. As a first step we can provide a simple 
> implementation which creates the tar files on demand.
> The destination datanode should retry the copy if the container on the source 
> node is not yet prepared.
> The raw container data is provided over HTTP. The HTTP endpoint should be 
> separated from the ObjectStore REST API (similar to the distinction between 
> HDFS-7240 and HDFS-13074).
> Long-term the HTTP endpoint should support HTTP Range requests: one container 
> could be copied from multiple sources by the destination.
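
A hedged sketch of the simple on-demand implementation mentioned in the 
description above, tarring and gzipping the container directory straight into 
the destination stream (it uses Apache commons-compress; the container 
directory layout and the class name are assumptions):

import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;
import java.util.zip.GZIPOutputStream;

import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
import org.apache.commons.compress.archivers.tar.TarArchiveOutputStream;

/**
 * On-demand copyData sketch: no prepare step, the tar.gz is created
 * while it is streamed to the destination.
 */
public class OnDemandContainerSource {

  public void copyData(Path containerDir, OutputStream destination)
      throws IOException {
    try (TarArchiveOutputStream tar =
             new TarArchiveOutputStream(new GZIPOutputStream(destination));
         Stream<Path> files = Files.walk(containerDir)) {
      tar.setLongFileMode(TarArchiveOutputStream.LONGFILE_POSIX);
      for (Path file : (Iterable<Path>) files::iterator) {
        if (Files.isRegularFile(file)) {
          TarArchiveEntry entry = new TarArchiveEntry(
              file.toFile(), containerDir.relativize(file).toString());
          tar.putArchiveEntry(entry);
          Files.copy(file, tar);
          tar.closeArchiveEntry();
        }
      }
      tar.finish();
    }
  }
}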



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
