Steve Loughran commented on HDFS-13186:

I like the broad applicability of this.

* I worry about how to do ser/deser securely, because I don't want to use 
things like Java serialization to persist the intermediate state (see the 
sketch after this list)
* the multipart put should also allow the caller to supply a File and range 
refs; that makes it easier to upload buffered data, especially as client libs 
(like the AWS SDK) are better at recovering from POST failures when they know 
they are reading off a file (also covered in the sketch below)
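To make both points concrete, here is a minimal sketch; the names 
(OpaquePartHandle, the File/offset/length overload) are illustrative only, 
not a proposed signature. The idea is that handles round-trip as raw bytes, 
so persisting intermediate state never touches Java serialization, and the 
put call can be pointed at a bounded range of a local file:

{code:java}
import java.io.File;
import java.io.IOException;

// Stand-ins for the opaque handle types from the proposal.
interface UploadHandle { byte[] toByteArray(); }
interface PartHandle { byte[] toByteArray(); }

// Hypothetical handle that is nothing but bytes: persisting or shipping
// the intermediate state is a byte-array copy, not ObjectOutputStream.
final class OpaquePartHandle implements PartHandle {
  private final byte[] bytes;

  OpaquePartHandle(byte[] bytes) {
    this.bytes = bytes.clone();
  }

  @Override
  public byte[] toByteArray() {
    return bytes.clone();
  }

  // Restore from the persisted form; no Java deserialization involved.
  static OpaquePartHandle fromByteArray(byte[] bytes) {
    return new OpaquePartHandle(bytes);
  }
}

// Hypothetical overload taking a File and a range: on a POST failure the
// client lib can reopen the file and re-seek, instead of needing a
// replayable InputStream.
interface FilePartUploader {
  PartHandle multipartPutPart(File file, long offset, long length,
      int partNumber, UploadHandle uploadId) throws IOException;
}
{code}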

> [WRITE] Multipart Multinode uploader API + Implementations
> ----------------------------------------------------------
>                 Key: HDFS-13186
>                 URL: https://issues.apache.org/jira/browse/HDFS-13186
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Ewan Higgs
>            Priority: Major
> To write files in parallel to an external storage system as in HDFS-12090, 
> there are two approaches:
>  # Naive approach: use a single datanode per file that copies blocks 
> locally as it streams data to the external service. This requires a copy of 
> each block inside the HDFS system, and then a second copy of the block to 
> send it to the external system.
>  # Better approach: a single point (e.g. the Namenode or an SPS-style 
> external client) and the Datanodes coordinate in a multipart, multinode 
> upload.
> This system needs to work with multiple back ends and needs to coordinate 
> across the network. So we propose an API that resembles the following:
> {code:java}
> public UploadHandle multipartInit(Path filePath) throws IOException;
> public PartHandle multipartPutPart(InputStream inputStream,
>     int partNumber, UploadHandle uploadId) throws IOException;
> public void multipartComplete(Path filePath,
>     List<Pair<Integer, PartHandle>> handles, 
>     UploadHandle multipartUploadId) throws IOException;{code}
> Here, UploadHandle and PartHandle are opaque handles in the vein of 
> PathHandle, so they can be serialized and deserialized in the hadoop-hdfs 
> project without knowledge of how to deserialize, e.g., S3A's version of an 
> UploadHandle and PartHandle.
> In an object store such as S3A, the implementation is straightforward. In 
> the case of writing multipart/multinode to HDFS, we can write each block as 
> a file part. The complete call will then perform a concat on the blocks.
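> A rough usage sketch of the API above (the uploader and blocks variables 
> are assumed to be in scope, and in practice each part would be put by a 
> different datanode rather than in a loop on one node):
> {code:java}
> // Coordinator opens the upload once.
> UploadHandle upload = uploader.multipartInit(new Path("/data/out.bin"));
>
> // Each datanode uploads one block as one part; part numbers are 1-based.
> List<Pair<Integer, PartHandle>> parts = new ArrayList<>();
> for (int i = 1; i <= blocks.size(); i++) {
>   // newInputStream() is a placeholder for however a block is opened.
>   try (InputStream in = blocks.get(i - 1).newInputStream()) {
>     PartHandle handle = uploader.multipartPutPart(in, i, upload);
>     parts.add(Pair.of(i, handle));
>   }
> }
>
> // A single coordination point stitches the parts together; on HDFS this
> // is where the concat of the per-block part files happens.
> uploader.multipartComplete(new Path("/data/out.bin"), parts, upload);
> {code}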
