[
https://issues.apache.org/jira/browse/HDFS-13186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16529894#comment-16529894
]
Steve Loughran commented on HDFS-13186:
---------------------------------------
you know, I've never actually looked at the code behind this.
Ewan, if you are doing stuff in the Hadoop FS APIs, I'll be expecting a
follow-up "add this to the Hadoop FS API", with some minimal contract tests to
verify that all FSs implementing this have the correct semantics (not
visible until completed; when completed you get the blocks you uploaded in the
right order, etc.). Sorry.
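For illustration only, a minimal contract test along those lines might look like
this (class and helper names are hypothetical, just to show the shape of the
check, not an existing test):
{code:java}
// Hypothetical contract-test sketch: parts must not be visible at the
// destination until multipartComplete is called, and the completed file
// must contain the parts in upload order.
@Test
public void testUploadNotVisibleUntilComplete() throws Exception {
  FileSystem fs = getFileSystem();           // assumed contract-test helper
  Path file = path("testMultipartUpload");   // assumed contract-test helper
  byte[] data = dataset(1024, 'a', 'z');     // assumed test-data helper

  UploadHandle upload = fs.multipartInit(file);
  PartHandle part = fs.multipartPutPart(
      new ByteArrayInputStream(data), 1, upload);

  // Nothing should exist at the destination before completion.
  assertPathDoesNotExist("upload visible before complete", file);

  fs.multipartComplete(file,
      Collections.singletonList(Pair.of(1, part)), upload);

  // After completion, the full contents must be readable in part order.
  verifyFileContents(fs, file, data);
}
{code}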
Also doesn't work with S3Guard, because it's calling
{{S3AFileSystem.getAmazonS3Client()}}; S3Guard won't notice when a new object
is created. I know that's not a direct concern of yours, but you are, I'm
afraid, expected to at least work with it, so that those new tests you are
going to have to add will work with S3Guard in auth mode.
thanks
> [PROVIDED Phase 2] Multipart Uploader API
> -----------------------------------------
>
> Key: HDFS-13186
> URL: https://issues.apache.org/jira/browse/HDFS-13186
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: Ewan Higgs
> Assignee: Ewan Higgs
> Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13186.001.patch, HDFS-13186.002.patch,
> HDFS-13186.003.patch, HDFS-13186.004.patch, HDFS-13186.005.patch,
> HDFS-13186.006.patch, HDFS-13186.007.patch, HDFS-13186.008.patch,
> HDFS-13186.009.patch, HDFS-13186.010.patch
>
>
> To write files in parallel to an external storage system as in HDFS-12090,
> there are two approaches:
> # Naive approach: use a single datanode per file that copies blocks locally
> as it streams data to the external service. This requires a copy for each
> block inside the HDFS system and then a copy for the block to be sent to the
> external system.
> # Better approach: a single point (e.g. Namenode or SPS-style external
> client) and Datanodes coordinate in a multipart, multinode upload.
> This system needs to work with multiple back ends and needs to coordinate
> across the network. So we propose an API that resembles the following:
> {code:java}
> public UploadHandle multipartInit(Path filePath) throws IOException;
>
> public PartHandle multipartPutPart(InputStream inputStream,
>     int partNumber, UploadHandle uploadId) throws IOException;
>
> public void multipartComplete(Path filePath,
>     List<Pair<Integer, PartHandle>> handles,
>     UploadHandle multipartUploadId) throws IOException;
> {code}
> Here, UploadHandle and PartHandle are opaque handles in the vein of
> PathHandle, so they can be serialized and deserialized in the hadoop-hdfs
> project without knowledge of how to deserialize, e.g., S3A's version of an
> UploadHandle and PartHandle.
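> As a purely illustrative sketch of how a coordinator might drive this API
> (assuming the methods above are exposed on FileSystem; {{openPartStream}} and
> {{numParts}} are placeholders, not part of this proposal):
> {code:java}
> // Illustrative only: drive the proposed API end to end. Each part could be
> // uploaded from a different node; only the opaque PartHandle travels back
> // to the coordinator.
> FileSystem fs = FileSystem.get(conf);
> Path dest = new Path("/data/large-file");
> UploadHandle upload = fs.multipartInit(dest);
>
> List<Pair<Integer, PartHandle>> parts = new ArrayList<>();
> for (int i = 1; i <= numParts; i++) {
>   InputStream partData = openPartStream(i);   // hypothetical helper
>   PartHandle handle = fs.multipartPutPart(partData, i, upload);
>   parts.add(Pair.of(i, handle));
> }
>
> // Nothing is visible at dest until complete is called.
> fs.multipartComplete(dest, parts, upload);
> {code}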
> In an object store such as S3A, the implementation is straightforward. In
> the case of writing multipart/multinode to HDFS, we can write each block as a
> file part. The complete call will perform a concat on the blocks, as sketched
> below.
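> A rough sketch of that HDFS-side complete (assumptions: parts are staged as
> temporary files, {{partPath}} is a hypothetical helper mapping an upload and
> part number to its staging file, and {{dfs}} is a DistributedFileSystem; this
> is not the actual patch):
> {code:java}
> // Illustrative only: each part was written to a staging file; completion
> // concatenates them onto the target in part order.
> public void multipartComplete(Path filePath,
>     List<Pair<Integer, PartHandle>> handles,
>     UploadHandle uploadId) throws IOException {
>   // Sort by part number so the result preserves upload order.
>   handles.sort(Comparator.comparing(Pair::getKey));
>   Path[] partFiles = handles.stream()
>       .map(h -> partPath(uploadId, h.getKey()))  // hypothetical helper
>       .toArray(Path[]::new);
>   // The first part becomes the target; the rest are concatenated onto it.
>   dfs.rename(partFiles[0], filePath);
>   if (partFiles.length > 1) {
>     dfs.concat(filePath,
>         Arrays.copyOfRange(partFiles, 1, partFiles.length));
>   }
> }
> {code}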