[ https://issues.apache.org/jira/browse/JCLOUDS-769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16097632#comment-16097632 ]

Veit Guna edited comment on JCLOUDS-769 at 7/23/17 1:46 PM:
------------------------------------------------------------

Two years later I still keep stumbling across this :). I also found my related
post from back then:

https://www.mail-archive.com/user@jclouds.apache.org/msg01562.html

Since jclouds forces me to provide the content length in advance, I now have to
stream the whole incoming payload to disk first before I can upload it using
jclouds.
That may be OK for files of a few MB, but with GBs it is really a pain. It's not
just that I have to provide enough disk space, it also costs twice the time.
So if a user uploads 4 GB to my service, I first have to store it completely on
disk (just to get the size), and only after that finishes (which already takes
quite a while) do I spend time uploading it in a second step (which can lead to
a timeout on the client side - e.g. in the browser).
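
In code, my current workaround looks roughly like this (just a sketch - the
container/object names are placeholders and request.getInputStream() stands in
for whatever hands me the client's data):

{code:java}
// First pass: spool the incoming stream to a temp file just to learn its size.
File tmp = File.createTempFile("upload", ".spool");
try (InputStream in = request.getInputStream();
     OutputStream out = new FileOutputStream(tmp)) {
    byte[] buf = new byte[8192];
    int n;
    while ((n = in.read(buf)) != -1) {
        out.write(buf, 0, n);              // everything hits the disk once
    }
}

// Second pass: now that the size is known, jclouds is happy to upload it.
Blob blob = blobStore.blobBuilder("my-object")
        .payload(tmp)                      // file payload, length can be read from disk
        .contentLength(tmp.length())
        .build();
blobStore.putBlob("my-container", blob);
tmp.delete();
{code}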

If jclouds performed auto-chunking itself, providing only the size of each chunk
to S3, that would help a lot of people who are currently brewing their own
workarounds.
The data would simply "stream through" at no extra cost - instead of going
through that temporary storage.
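
What I imagine jclouds could do internally is roughly this (again only a sketch,
assuming the low-level multipart methods that the portable BlobStore interface
seems to expose in 2.x - initiateMultipartUpload / uploadMultipartPart /
completeMultipartUpload - plus Guava's ByteStreams; I haven't checked 1.8.1):

{code:java}
void streamThrough(BlobStore blobStore, String container, String name, InputStream in)
        throws IOException {
    // only each part's size is ever known up front, never the total size
    int partSize = (int) Math.max(5L * 1024 * 1024, blobStore.getMinimumMultipartPartSize());
    BlobMetadata metadata = blobStore.blobBuilder(name)
            .payload(new byte[0])          // payload irrelevant here, we only need the metadata
            .build().getMetadata();
    MultipartUpload mpu = blobStore.initiateMultipartUpload(container, metadata, new PutOptions());
    List<MultipartPart> parts = new ArrayList<>();
    byte[] buf = new byte[partSize];
    int partNumber = 1;
    int filled;
    try {
        while ((filled = ByteStreams.read(in, buf, 0, buf.length)) > 0) {
            Payload part = Payloads.newByteArrayPayload(Arrays.copyOf(buf, filled));
            part.getContentMetadata().setContentLength((long) filled);
            parts.add(blobStore.uploadMultipartPart(mpu, partNumber++, part));
        }
        blobStore.completeMultipartUpload(mpu, parts);
    } catch (IOException | RuntimeException e) {
        blobStore.abortMultipartUpload(mpu);   // don't leave a half-finished upload behind
        throw e;
    }
}
{code}

If putBlob() did this slicing automatically for a payload without a known
length, nobody would need the temp file.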

Is there anything planned in this regard? :)




> Upload blob from stream
> -----------------------
>
>                 Key: JCLOUDS-769
>                 URL: https://issues.apache.org/jira/browse/JCLOUDS-769
>             Project: jclouds
>          Issue Type: New Feature
>          Components: jclouds-blobstore
>    Affects Versions: 1.8.1
>            Reporter: Akos Hajnal
>              Labels: multipart, s3
>
> Dear Developers,
> It was not easy, but using the S3 API it was possible to upload a large blob 
> from a stream - without knowing its size in advance (and without storing all 
> the data locally). I found solutions using jclouds' aws-s3-specific API (some 
> async interface), but I really miss this feature in jclouds' general API.
> My dream is to have a method like:
> blob.getOutputStream(), into which I can write as much data as I want and 
> which pushes the data to the storage as I go, until I close the stream.
> (When I used S3, I created a wrapper class extending OutputStream, which 
> initiates multipart upload, buffers data written to the output stream, writes 
> a part when the buffer is full, and finalizes multipart upload on stream 
> close.) 
> I don't know whether it is possible for all providers, but I really miss it...
> Thank you,
> Akos Hajnal
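
For reference, a rough sketch of the push-based OutputStream wrapper described
above, ported from the raw S3 API to the portable multipart calls on
org.jclouds.blobstore.BlobStore (assuming jclouds 2.x - the names below are not
verified against 1.8.1). It buffers one part at a time, uploads a part whenever
the buffer fills up, and completes the multipart upload on close():

{code:java}
import java.io.IOException;
import java.io.OutputStream;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import org.jclouds.blobstore.BlobStore;
import org.jclouds.blobstore.domain.MultipartPart;
import org.jclouds.blobstore.domain.MultipartUpload;
import org.jclouds.blobstore.options.PutOptions;
import org.jclouds.io.Payload;
import org.jclouds.io.Payloads;

public class MultipartBlobOutputStream extends OutputStream {
    private final BlobStore blobStore;
    private final MultipartUpload mpu;
    private final List<MultipartPart> parts = new ArrayList<>();
    private final byte[] buffer;
    private int filled = 0;
    private int partNumber = 1;

    public MultipartBlobOutputStream(BlobStore blobStore, String container, String name, int partSize) {
        this.blobStore = blobStore;
        this.buffer = new byte[partSize];   // must be >= the provider minimum (5 MB on S3)
        this.mpu = blobStore.initiateMultipartUpload(container,
                blobStore.blobBuilder(name).payload(new byte[0]).build().getMetadata(),
                new PutOptions());
    }

    @Override
    public void write(int b) throws IOException {
        buffer[filled++] = (byte) b;
        if (filled == buffer.length) {
            uploadPart();                   // buffer full: push this part to the store
        }
    }

    @Override
    public void close() throws IOException {
        if (filled > 0) {
            uploadPart();                   // last (possibly short) part
        }
        blobStore.completeMultipartUpload(mpu, parts);
    }

    private void uploadPart() {
        Payload payload = Payloads.newByteArrayPayload(Arrays.copyOf(buffer, filled));
        payload.getContentMetadata().setContentLength((long) filled);
        parts.add(blobStore.uploadMultipartPart(mpu, partNumber++, payload));
        filled = 0;
    }
}
{code}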



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
