Ewan Higgs created HDFS-13186:

             Summary: Multipart Multinode uploader API + Implementations
                 Key: HDFS-13186
                 URL: https://issues.apache.org/jira/browse/HDFS-13186
             Project: Hadoop HDFS
          Issue Type: Sub-task
            Reporter: Ewan Higgs

To write files in parallel to an external storage system as in HDFS-12090, 
there are two approaches:
 # Naive approach: use a single datanode per file that copies blocks locally as 
it streams data to the external service. This requires a copy of each block 
inside the HDFS system and then a second copy when the block is sent to the 
external storage system.
 # Better approach: a single point (e.g. Namenode or SPS-style external client) 
and the Datanodes coordinate in a multipart, multinode upload.

This system needs to work with multiple back ends and needs to coordinate 
across the network. So we propose an API that resembles the following:
{code:java}
public UploadHandle multipartInit(Path filePath) throws IOException;

public PartHandle multipartPutPart(InputStream inputStream,
    int partNumber, UploadHandle uploadId) throws IOException;

public void multipartComplete(Path filePath,
    List<Pair<Integer, PartHandle>> handles,
    UploadHandle multipartUploadId) throws IOException;
{code}
Here, UploadHandle and PartHandle are opaque handles in the vein of PathHandle, 
so they can be serialized and deserialized in the hadoop-hdfs project without 
knowledge of how to deserialize e.g. S3A's version of an UploadHandle and 
PartHandle.
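
For illustration, a client of the proposed API might drive an upload roughly as 
in the sketch below. This is only a sketch: the uploader object, the openPart 
helper, numParts, the target path, and the concrete Pair class (commons-lang3 
is assumed here) are placeholders rather than part of the proposal.
{code:java}
// Illustrative only; not part of the proposal. 'uploader' stands in for
// whatever object ends up exposing the proposed methods, and openPart(i)
// is a placeholder that returns an InputStream over the bytes of part i.
Path target = new Path("/tmp/multipart-demo");    // hypothetical target path
UploadHandle upload = uploader.multipartInit(target);

List<Pair<Integer, PartHandle>> parts = new ArrayList<>();
for (int i = 1; i <= numParts; i++) {
  try (InputStream in = openPart(i)) {
    PartHandle part = uploader.multipartPutPart(in, i, upload);
    parts.add(Pair.of(i, part));                   // commons-lang3 Pair assumed
  }
}

// Hand back all (partNumber, PartHandle) pairs to finish the upload.
uploader.multipartComplete(target, parts, upload);
{code}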

In an object store such as S3A, the implementation is straightforward. In the 
case of writing multipart/multinode to HDFS, we can write each block as a file 
part. The complete call will perform a concat on the blocks.
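
A rough sketch of that HDFS-side complete step follows, assuming each part has 
been staged as a separate file and that the existing FileSystem.concat 
primitive is used to stitch them together; completeByConcat and the staging 
layout are illustrative, not part of the proposal.
{code:java}
// Illustrative sketch of multipartComplete on an HDFS back end.
// Assumes each part was staged as its own file and that partPaths is
// ordered by part number.
void completeByConcat(FileSystem fs, Path target, List<Path> partPaths)
    throws IOException {
  // The first part becomes the concat target; the remaining parts are
  // appended onto it with the existing concat call.
  Path first = partPaths.get(0);
  Path[] rest = partPaths.subList(1, partPaths.size()).toArray(new Path[0]);
  if (rest.length > 0) {
    fs.concat(first, rest);
  }
  // Move the concatenated file into its final location.
  fs.rename(first, target);
}
{code}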
