[
https://issues.apache.org/jira/browse/HDFS-13186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16449420#comment-16449420
]
Chris Douglas commented on HDFS-13186:
--------------------------------------
I like this design. The {{MultiPartUploader}} avoids adding new hooks to
{{FileSystem}}, without losing generality or pluggability.
High level:
* The current impl doesn't define a default for {{FileSystem}} implementations,
which could be a serial copy. Instead, it throws an exception. Utilities (like
{{FsShell}} or YARN) need to implement some boilerplate for both paths, rather
than using a single path that falls back to a serial upload.
* Some implementations might benefit from an explicit
{{MultipartUploader::abort}}, which may clean up the partial upload. Clearly it
can't be guaranteed, but we'd like the property that an {{UploadHandle}}
persisted to a WAL could be used for cleanup.
* The {{PartHandle}} could retain its ID, rather than providing a
{{Pair<Integer,PartHandle>}} to {{commit}}. This might make repartitioning
difficult, i.e., splitting a slow {{PartHandle}}, but implementations could
add custom handling if that's important. It would be sufficient for the
{{PartHandle}} to be {{Comparable}}, though equality should be treated either
as a duplicate or as an error at {{complete}} by the {{MultipartUploader}}.
* Does the {{UploadHandle}} init ever vary, depending on the src? Intra-FS
copies?
* Right now, the utility doesn't offer an API to partition a file, or to create
(bounded) {{InputStream}} args to {{putPart}}.
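On the fallback point above: a minimal sketch of what a serial default could look like, using {{java.nio.file}} in place of the real {{FileSystem}} API (the class and method names here are hypothetical, not from the patch). Parts are staged locally and concatenated in part order on complete.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.*;
import java.util.Map;
import java.util.TreeMap;

// Hypothetical serial fallback: parts are staged as local temp files and
// concatenated in part order when the upload completes. A real default for
// FileSystem would stream through create()/append() instead of java.nio.file.
public class SerialMultipartUploader {
  private final Map<Integer, Path> parts = new TreeMap<>();
  private final Path stagingDir;

  public SerialMultipartUploader() throws IOException {
    this.stagingDir = Files.createTempDirectory("mpu-staging");
  }

  public void putPart(int partNumber, InputStream in) throws IOException {
    Path part = stagingDir.resolve("part-" + partNumber);
    Files.copy(in, part, StandardCopyOption.REPLACE_EXISTING);
    parts.put(partNumber, part);
  }

  // "Complete" is just a serial concatenation of the staged parts.
  public void complete(Path target) throws IOException {
    try (OutputStream out = Files.newOutputStream(target)) {
      for (Path part : parts.values()) {  // TreeMap iterates in part order
        Files.copy(part, out);
        Files.delete(part);
      }
    }
    Files.delete(stagingDir);
  }
}
```

A utility like {{FsShell}} could then use one code path and fall back to this when the store offers no native multipart support.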
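On the abort point: an illustrative shape for a best-effort {{abort}}, where the {{UploadHandle}} encodes enough state (here, just a staging directory path) to be persisted to a WAL and replayed for cleanup later. All names are illustrative, not from the patch.

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.stream.Stream;

// Illustrative only: an abort() that best-effort cleans up staged parts.
// An UploadHandle that encodes the staging location could be written to a
// WAL and used to reconstruct the upload for cleanup after a crash.
public class AbortableUpload {
  private final Path stagingDir;

  public AbortableUpload(Path stagingDir) {
    this.stagingDir = stagingDir;
  }

  // Reconstruct the upload from handle bytes recovered out of a WAL.
  public static AbortableUpload fromHandle(byte[] handle) {
    return new AbortableUpload(Paths.get(new String(handle)));
  }

  public byte[] toHandle() {
    return stagingDir.toString().getBytes();
  }

  // Best effort: delete whatever parts are still staged. Cleanup cannot be
  // guaranteed (the store may have expired the upload already).
  public void abort() throws IOException {
    if (!Files.exists(stagingDir)) {
      return;
    }
    try (Stream<Path> files = Files.list(stagingDir)) {
      for (Path p : (Iterable<Path>) files::iterator) {
        Files.deleteIfExists(p);
      }
    }
    Files.deleteIfExists(stagingDir);
  }
}
```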
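On the {{PartHandle}}-retains-its-ID point: a hypothetical handle that carries its part number and is {{Comparable}}, so {{complete}} can take a plain list instead of {{Pair<Integer,PartHandle>}}, with duplicate part numbers treated as an error at complete time. Names are illustrative, not from the patch.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical PartHandle that carries its own part number and orders by it.
// Duplicate part numbers are rejected at complete(), per the comment above;
// an implementation could instead treat them as duplicates and deduplicate.
public class OrderedPartHandle implements Comparable<OrderedPartHandle> {
  private final int partNumber;
  private final byte[] etag;  // store-specific identity, e.g. an S3 ETag

  public OrderedPartHandle(int partNumber, byte[] etag) {
    this.partNumber = partNumber;
    this.etag = etag;
  }

  public int partNumber() { return partNumber; }

  @Override
  public int compareTo(OrderedPartHandle other) {
    return Integer.compare(partNumber, other.partNumber);
  }

  // Sort the handles into part order and reject duplicate part numbers.
  public static List<OrderedPartHandle> validateForComplete(
      List<OrderedPartHandle> handles) {
    List<OrderedPartHandle> sorted = new ArrayList<>(handles);
    Collections.sort(sorted);
    for (int i = 1; i < sorted.size(); i++) {
      if (sorted.get(i).partNumber == sorted.get(i - 1).partNumber) {
        throw new IllegalArgumentException(
            "Duplicate part number: " + sorted.get(i).partNumber);
      }
    }
    return sorted;
  }
}
```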
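On the partitioning point: a sketch of the kind of helper the utility could offer, splitting a file of known length into fixed-size ranges and opening a bounded {{InputStream}} per range for {{putPart}}. Class and method names are hypothetical.

```java
import java.io.EOFException;
import java.io.File;
import java.io.FileInputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;

// Hypothetical partitioning helper: fixed-size (offset, length) ranges plus
// a bounded stream over each range, suitable for handing to putPart().
public class FilePartitioner {
  // (offset, length) pairs covering the file, in part order.
  public static List<long[]> partition(long fileLength, long partSize) {
    List<long[]> ranges = new ArrayList<>();
    for (long off = 0; off < fileLength; off += partSize) {
      ranges.add(new long[] { off, Math.min(partSize, fileLength - off) });
    }
    return ranges;
  }

  // A FilterInputStream exposing at most `remaining` bytes of the
  // underlying stream, so one part can be uploaded without copying.
  public static final class BoundedStream extends FilterInputStream {
    private long remaining;

    public BoundedStream(InputStream in, long limit) {
      super(in);
      this.remaining = limit;
    }

    @Override public int read() throws IOException {
      if (remaining <= 0) return -1;
      int b = in.read();
      if (b >= 0) remaining--;
      return b;
    }

    @Override public int read(byte[] b, int off, int len) throws IOException {
      if (remaining <= 0) return -1;
      int n = in.read(b, off, (int) Math.min(len, remaining));
      if (n > 0) remaining -= n;
      return n;
    }
  }

  // Open a bounded view over one part of the file.
  public static InputStream openPart(File file, long offset, long length)
      throws IOException {
    InputStream in = new FileInputStream(file);
    long toSkip = offset;
    while (toSkip > 0) {  // skip() may skip fewer bytes than requested
      long skipped = in.skip(toSkip);
      if (skipped <= 0) {
        in.close();
        throw new EOFException("Could not seek to offset " + offset);
      }
      toSkip -= skipped;
    }
    return new BoundedStream(in, length);
  }
}
```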
Minor:
{{MultipartUploader}}
* IIRC {{ClassUtil::findContainingJar}} is expensive; guard it with {{if
(LOG.isDebugEnabled())}}
* Might be worth warning if two {{MultipartUploader}} impls report the same
scheme
* The typical {{ServiceLoader}} pattern returns a factory object that produces
instances, rather than matching the class and producing instances by
reflection. This way, the factory instance can make some additional checks
and/or handle scheme collisions by chaining. It also avoids the {{initialize}}
pattern, since the factory can invoke the constructor. The lookup is slower (a
scan of loaded uploaders vs. a map lookup), but acceptable for a small number
of instances.
* Do {{putPart}} and {{complete}} need the {{filePath}} parameter?
* Rename {{MultipartUploader}} to {{Uploader}}?
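On the {{ServiceLoader}} factory point: a sketch of the factory shape. In practice the factories would be discovered via {{java.util.ServiceLoader}}; here the list is passed in directly so the lookup and chaining logic is visible. Names are illustrative, not from the patch.

```java
import java.util.List;

// Sketch of a factory-based registry. Each factory constructs uploaders
// itself, so no initialize()-after-reflection step is needed, and a factory
// can decline a scheme (return null) to let the next one in the chain try.
public class UploaderFactories {
  public interface UploaderFactory {
    // Return an uploader for this scheme, or null to pass to the next
    // factory (this is how scheme collisions can be chained).
    Object createUploader(String scheme);
  }

  private final List<UploaderFactory> factories;

  public UploaderFactories(List<UploaderFactory> factories) {
    this.factories = factories;
  }

  // Linear scan of loaded factories; slower than a scheme->class map but
  // fine for a handful of implementations.
  public Object lookup(String scheme) {
    for (UploaderFactory f : factories) {
      Object uploader = f.createUploader(scheme);
      if (uploader != null) {
        return uploader;
      }
    }
    throw new UnsupportedOperationException("No uploader for " + scheme);
  }
}
```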
{{BBPartHandle}}
* While it's unlikely to be an issue, direct {{ByteBuffer}}s don't support
{{array()}}. Is this to support {{Serializable}}?
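To illustrate the {{ByteBuffer}} point: direct buffers report {{hasArray()}} as false and {{array()}} throws {{UnsupportedOperationException}}, so a handle that needs a {{byte[]}} (e.g. for {{Serializable}}) must copy the contents out with {{get()}}. This is standard JDK behavior; the helper class name is made up.

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

// Direct buffers have no backing array, so array() throws. A handle that
// needs a byte[] must copy the contents out via get() instead.
public class DirectBufferBytes {
  public static byte[] toBytes(ByteBuffer buf) {
    if (buf.hasArray()) {
      // Heap buffer: slice out the live region (mind arrayOffset()).
      return Arrays.copyOfRange(buf.array(),
          buf.arrayOffset() + buf.position(),
          buf.arrayOffset() + buf.limit());
    }
    // Direct buffer: copy via a duplicate so the caller's position is untouched.
    ByteBuffer dup = buf.duplicate();
    byte[] out = new byte[dup.remaining()];
    dup.get(out);
    return out;
  }
}
```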
> [PROVIDED Phase 2] Multipart Multinode uploader API + Implementations
> ---------------------------------------------------------------------
>
> Key: HDFS-13186
> URL: https://issues.apache.org/jira/browse/HDFS-13186
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: Ewan Higgs
> Assignee: Ewan Higgs
> Priority: Major
> Attachments: HDFS-13186.001.patch, HDFS-13186.002.patch,
> HDFS-13186.003.patch
>
>
> To write files in parallel to an external storage system as in HDFS-12090,
> there are two approaches:
> # Naive approach: use a single datanode per file that copies blocks locally
> as it streams data to the external service. This requires a copy for each
> block inside the HDFS system and then a copy for the block to be sent to the
> external system.
> # Better approach: a single point (e.g. Namenode or SPS-style external
> client) and Datanodes coordinate in a multipart, multinode upload.
> This system needs to work with multiple back ends and needs to coordinate
> across the network. So we propose an API that resembles the following:
> {code:java}
> public UploadHandle multipartInit(Path filePath) throws IOException;
> public PartHandle multipartPutPart(InputStream inputStream,
>     int partNumber, UploadHandle uploadId) throws IOException;
> public void multipartComplete(Path filePath,
>     List<Pair<Integer, PartHandle>> handles,
>     UploadHandle multipartUploadId) throws IOException;{code}
> Here, UploadHandle and PartHandle are opaque handlers in the vein of
> PathHandle so they can be serialized and deserialized in hadoop-hdfs project
> without knowledge of how to deserialize e.g. S3A's version of an UploadHandle
> and PartHandle.
> In an object store such as S3A, the implementation is straightforward. In
> the case of writing multipart/multinode to HDFS, we can write each block as a
> file part. The complete call will perform a concat on the blocks.
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)