[
https://issues.apache.org/jira/browse/HDFS-13713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16603489#comment-16603489
]
Ewan Higgs commented on HDFS-13713:
-----------------------------------
[~goiri], yes, there is an HDFS implementation: see
{{org.apache.hadoop.fs.FileSystemMultipartUploader}}. There is no usage example
yet, since this is a primitive aimed at forthcoming work (HDFS-12090). But it
has wide applicability (e.g. DistCp), so it was submitted to trunk rather than
on the HDFS-12090 branch.
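For reference, here is a rough usage sketch. It is not lifted from the patch:
the factory and handle names ({{MultipartUploaderFactory}}, {{UploadHandle}},
{{PartHandle}}) and the exact signatures are my reading of the trunk API and
may differ from what finally lands.
{code:java}
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.commons.lang3.tuple.Pair;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.MultipartUploader;
import org.apache.hadoop.fs.MultipartUploaderFactory;
import org.apache.hadoop.fs.PartHandle;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.UploadHandle;

public class MultipartUploadSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    Path dest = new Path("hdfs:///user/ewan/target-file");
    FileSystem fs = dest.getFileSystem(conf);

    // The factory picks the uploader for the scheme, e.g.
    // FileSystemMultipartUploader for hdfs://.
    MultipartUploader uploader = MultipartUploaderFactory.get(fs, conf);

    UploadHandle upload = uploader.initialize(dest);
    List<Pair<Integer, PartHandle>> parts = new ArrayList<>();
    try {
      byte[] part1 = "part one".getBytes("UTF-8");
      PartHandle handle = uploader.putPart(dest,
          new ByteArrayInputStream(part1), 1, upload, part1.length);
      parts.add(Pair.of(1, handle));

      // Nothing is visible at dest until complete() commits the parts.
      uploader.complete(dest, parts, upload);
    } catch (IOException e) {
      // On failure, abort so the store can clean up the pending upload.
      uploader.abort(dest, upload);
      throw e;
    }
  }
}
{code}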
{quote}Does it make sense to add a pointer to the S3 implementation as an
example?{quote}
Maybe, but would pointing to the S3 implementation preempt the possibility of
having a wasb and/or adl implementation?
> Add specification of Multipart Upload API to FS specification, with contract
> tests
> ----------------------------------------------------------------------------------
>
> Key: HDFS-13713
> URL: https://issues.apache.org/jira/browse/HDFS-13713
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: fs, test
> Affects Versions: 3.2.0
> Reporter: Steve Loughran
> Assignee: Ewan Higgs
> Priority: Blocker
> Attachments: HDFS-13713.001.patch, HDFS-13713.002.patch,
> multipartuploader.md
>
>
> There's nothing in the FS spec covering the new API. Add it in a new .md file
> * Add an FS model with the notion of a function mapping (uploadID -> Upload)
> and the operations (list, commit, abort). The [TLA+
> model|https://issues.apache.org/jira/secure/attachment/12865161/objectstore.pdf]
> of HADOOP-13786 shows how to do this.
> * Contract tests of not just the successful path, but all the invalid ones.
> * Implementations of the contract tests for all FSs which support the new API.
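
To make the contract-test point concrete, here is a rough sketch of one
invalid-path case. The class and method names are placeholders, not what the
patch will necessarily use; {{intercept}} is from
{{org.apache.hadoop.test.LambdaTestUtils}}.
{code:java}
import static org.apache.hadoop.test.LambdaTestUtils.intercept;

import java.io.IOException;
import java.util.ArrayList;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.MultipartUploader;
import org.apache.hadoop.fs.MultipartUploaderFactory;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.UploadHandle;
import org.apache.hadoop.fs.contract.AbstractFSContractTestBase;
import org.junit.Test;

public abstract class MultipartUploaderContractSketch
    extends AbstractFSContractTestBase {

  @Test
  public void testCompleteAfterAbortFails() throws Exception {
    FileSystem fs = getFileSystem();
    Path dest = path("testCompleteAfterAbortFails");
    MultipartUploader uploader =
        MultipartUploaderFactory.get(fs, fs.getConf());

    UploadHandle upload = uploader.initialize(dest);
    uploader.abort(dest, upload);

    // Completing an aborted upload is one of the invalid sequences the
    // contract should pin down: it must fail rather than silently succeed.
    intercept(IOException.class, () ->
        uploader.complete(dest, new ArrayList<>(), upload));
  }
}
{code}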