[
https://issues.apache.org/jira/browse/JCLOUDS-769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14378753#comment-14378753
]
Andrew Gaul commented on JCLOUDS-769:
-------------------------------------
[~knl] You might want to push this for review as a strawman -- it might
motivate progress on this important feature or at least give others a local
workaround. If we expose the part size and document the per-part buffering,
does this help users? Personally I am inclined to expose portable
initiateMPU/uploadPart/completeMPU/abortMPU operations and let people compose
on top of those, although this pushes explicit complexity onto callers.
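
As a rough illustration of that composition, here is a minimal sketch against
a hypothetical portable interface; the operation names mirror this comment's
proposal and are not a shipped jclouds API:

    import java.io.IOException;
    import java.io.InputStream;
    import java.util.ArrayList;
    import java.util.List;

    /** Hypothetical portable MPU operations; names follow the proposal above. */
    interface MultipartApi {
        String initiateMPU(String container, String name);
        String uploadPart(String uploadId, int partNumber, byte[] data, int len);
        void completeMPU(String uploadId, List<String> partETags);
        void abortMPU(String uploadId);
    }

    class StreamingPut {
        /**
         * Upload a stream of unknown length, buffering at most partSize
         * bytes in memory at a time.
         */
        static void putFromStream(MultipartApi api, String container, String name,
                                  InputStream in, int partSize) throws IOException {
            String uploadId = api.initiateMPU(container, name);
            List<String> etags = new ArrayList<String>();
            try {
                byte[] buf = new byte[partSize];
                int partNumber = 1;
                int n;
                while ((n = readFully(in, buf)) > 0) {
                    etags.add(api.uploadPart(uploadId, partNumber++, buf, n));
                }
                api.completeMPU(uploadId, etags);
            } catch (IOException | RuntimeException e) {
                api.abortMPU(uploadId); // avoid leaking an incomplete upload
                throw e;
            }
        }

        /** Fill buf as far as possible; returns bytes read, 0 at end of stream. */
        private static int readFully(InputStream in, byte[] buf) throws IOException {
            int off = 0;
            while (off < buf.length) {
                int n = in.read(buf, off, buf.length - off);
                if (n == -1) {
                    break;
                }
                off += n;
            }
            return off;
        }
    }

Given something like this, a streaming OutputStream wrapper of the kind the
reporter describes below becomes a small amount of user code, at the cost of
buffering one part in memory at a time.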
> Upload blob from stream
> -----------------------
>
> Key: JCLOUDS-769
> URL: https://issues.apache.org/jira/browse/JCLOUDS-769
> Project: jclouds
> Issue Type: New Feature
> Components: jclouds-blobstore
> Affects Versions: 1.8.1
> Reporter: Akos Hajnal
> Labels: multipart, s3
>
> Dear Developers,
> It was not easy, but using the S3 API it was possible to upload a large blob
> from a stream - without knowing its size in advance (and without storing all
> the data locally first). I found solutions using jclouds' aws-s3-specific API
> (some async interface), but I really miss this feature from jclouds' general
> API.
> My dream is to have a method like
> blob.getOutputStream(), into which I can write as much data as I want and
> which pushes data to the storage as it is written, until I close the stream.
> (When I used S3, I created a wrapper class extending OutputStream, which
> initiates a multipart upload, buffers data written to the output stream,
> uploads a part when the buffer is full, and finalizes the multipart upload
> when the stream is closed; see the sketch after this message.)
> I don't know whether this is possible for all providers, but I really miss
> it...
> Thank you,
> Akos Hajnal
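
The reporter's wrapper is not attached to the issue; a minimal sketch of the
pattern described above, written against the AWS SDK for Java (v1) multipart
calls rather than jclouds, could look like this:

    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.model.CompleteMultipartUploadRequest;
    import com.amazonaws.services.s3.model.InitiateMultipartUploadRequest;
    import com.amazonaws.services.s3.model.PartETag;
    import com.amazonaws.services.s3.model.UploadPartRequest;

    import java.io.ByteArrayInputStream;
    import java.io.IOException;
    import java.io.OutputStream;
    import java.util.ArrayList;
    import java.util.List;

    /** Buffers writes and uploads a 5 MB part whenever the buffer fills. */
    public class S3MultipartOutputStream extends OutputStream {
        private static final int PART_SIZE = 5 * 1024 * 1024; // S3 minimum part size

        private final AmazonS3 s3;
        private final String bucket;
        private final String key;
        private final String uploadId;
        private final byte[] buffer = new byte[PART_SIZE];
        private final List<PartETag> etags = new ArrayList<PartETag>();
        private int count = 0;
        private int partNumber = 1;

        public S3MultipartOutputStream(AmazonS3 s3, String bucket, String key) {
            this.s3 = s3;
            this.bucket = bucket;
            this.key = key;
            this.uploadId = s3.initiateMultipartUpload(
                new InitiateMultipartUploadRequest(bucket, key)).getUploadId();
        }

        @Override
        public void write(int b) throws IOException {
            buffer[count++] = (byte) b;
            if (count == buffer.length) {
                uploadBufferedPart();
            }
        }

        private void uploadBufferedPart() {
            UploadPartRequest request = new UploadPartRequest()
                .withBucketName(bucket)
                .withKey(key)
                .withUploadId(uploadId)
                .withPartNumber(partNumber++)
                .withInputStream(new ByteArrayInputStream(buffer, 0, count))
                .withPartSize(count);
            etags.add(s3.uploadPart(request).getPartETag());
            count = 0;
        }

        @Override
        public void close() throws IOException {
            if (count > 0) {
                uploadBufferedPart(); // the final part may be smaller than 5 MB
            }
            s3.completeMultipartUpload(new CompleteMultipartUploadRequest(
                bucket, key, uploadId, etags));
        }
    }

Error handling (abortMultipartUpload on failure), a bulk write(byte[], int,
int) override, and the empty-stream case (S3 requires at least one part) are
omitted for brevity. The 5 MB buffer matches S3's minimum part size; S3 allows
only the final part to be smaller, which close() handles by flushing whatever
remains.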