org.apache.commons.io.output.ByteArrayOutputStream sounds like a nice
improvement over java.io.ByteArrayOutputStream (at least for my purposes),
thanks Zack!
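
If I switch, I'm assuming the only change on my end is the superclass, since I
only use the write() and toByteArray() surface (untested sketch):

    class BlobWritingByteArrayOutputStream
            extends org.apache.commons.io.output.ByteArrayOutputStream {
        // commons-io's version grows by chaining extra buffers instead of
        // reallocating and copying the whole array, and its close() is
        // documented as a no-op, so the super.close() in my override below
        // should stay harmless
    }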

The problem I'm running into is actually with the caller's
Closeables.closeQuietly(documentOutputStream); call. That catches any
IOException that's thrown from close() and logs it, instead of throwing it.
That won't work for me, since I won't know if there was an error writing to
the blob store until close() is called on my OutputStream. I can of course
change the caller to use different error-handling for closing the stream,
but it makes me wonder if using the close() method to upload the blob is
the right approach. If you're given an OutputStream to write to, you'd
expect the *real* errors to come from the write() methods, and not the
close() method, right?
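
To make that concrete, here's roughly the caller's pattern (just a sketch; the
constructor arguments and method name are illustrative, the closeQuietly() call
is the real one):

    void writeDocument(BlobStore blobStore, String container, String name,
            byte[] document) throws IOException {
        OutputStream documentOutputStream =
                new BlobWritingByteArrayOutputStream(blobStore, container, name);
        try {
            documentOutputStream.write(document);  // write errors surface here, as expected...
        } finally {
            // ...but putBlob() runs inside close(), so an upload failure is
            // caught and logged by closeQuietly() and never reaches the caller
            Closeables.closeQuietly(documentOutputStream);
        }
    }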


*Steve Kingsland*

Senior Software Engineer

*Opower * <http://www.opower.com/>


*We’re hiring! See jobs here <http://www.opower.com/careers> *


On Tue, Aug 5, 2014 at 7:21 AM, Zack Shoylev <[email protected]>
wrote:

>  Your code seems fine. I have used
> http://commons.apache.org/proper/commons-io/apidocs/org/apache/commons/io/output/ByteArrayOutputStream.html
> in the past to convert between stream types, but it seems like it doesn't
> match your case very well.
>
>  Note you might have to do writeBytesToBlob() before super.close(), but
> you can test that.
>
>  Let us know how it turns out!
>  ------------------------------
> *From:* Steve Kingsland [[email protected]]
> *Sent:* Monday, August 04, 2014 9:22 PM
> *To:* [email protected]
> *Subject:* Re: How to write a Blob using an OutputStream?
>
>   OK, then it appears that my calling code (which would be difficult and
> risky to change) is incompatible with jclouds' BlobStore API: my caller
> wants to obtain an OutputStream for writing to the blob store, and jclouds
> wants to obtain an InputStream for reading the blob's content that should
> be written. Therefore, my only solution is to buffer the blob data, either
> in memory or on disk, before uploading it to the blob store.
>
>  Given that the documents I'm trying to write to the blob store will
> generally be small (1KB to 1MB), I'm going with a simple approach, for
> providing my caller with an OutputStream that they can use to write the
> blob's payload:
>
>  class BlobWritingByteArrayOutputStream extends
> java.io.ByteArrayOutputStream {
>
>     // these are all set in the constructor
>     private BlobStore blobStore;
>     private String containerName, blobName;
>
>     // the client will have to call this when he's finished writing, so this
>     // is our chance to upload the blob, now that we have the full payload
>     // in memory
>     @Override
>     public void close() throws IOException {
>         super.close();
>
>         writeBytesToBlob();
>     }
>
>     private void writeBytesToBlob() {
>         byte[] payload = toByteArray();
>
>         Blob blob = blobStore.blobBuilder(blobName)
>                              .payload(payload)
>                              .contentLength(payload.length)
>                              .build();
>         blobStore.putBlob(containerName, blob);
>     }
> }
>
>  Aside from the weird inversion of control going on and the requirement
> that close() be called, I think something simple like this - to buffer
> the bytes being written before uploading them to the blob store - might
> work for me.
>
>  Thoughts?
>
>
>
>
>   *Steve Kingsland*
>
> Senior Software Engineer
>
> * Opower * <http://www.opower.com/>
>
>
> * We’re hiring! See jobs here <http://www.opower.com/careers> *
>
>
> On Mon, Aug 4, 2014 at 9:05 PM, Andrew Gaul <[email protected]> wrote:
>
>> On Mon, Aug 04, 2014 at 08:46:37PM -0400, Steve Kingsland wrote:
>> > Here is Kevin's example using PipedInputStream and PipedOutputStream:
>> > https://groups.google.com/d/msg/jclouds/F2pCt9i7TSg/AUF4AqOO0TMJ
>> >
>> > I don't have the need to use different threads, though, so instead I'd
>> do
>> > something like this?
>>
>>  This will not work; putBlob blocks until the operation completes.
>> Further you must use PipedInputStream/PipedOutputStream with separate
>> threads to avoid deadlock, as its Javadoc states:
>>
>> http://docs.oracle.com/javase/7/docs/api/java/io/PipedInputStream.html
>>
>> Unfortunately jclouds has poor support for asynchronous operations and
>> you can really only fake the desired behavior with various InputStreams.
>> I strongly recommend trying to cast your solution into some kind of
>> ByteSource or InputStream.
>>
>> > And then when close() or flush() is called on the returned OutputStream,
>> > the blob is uploaded like magic? Is it OK that I'm not setting the
>> content
>> > length?
>>
>>  Some blobstores, specifically Amazon S3, require a content length, while
>> others such as OpenStack Swift do not.
>>
>> --
>> Andrew Gaul
>> http://gaul.org/
>>
>
>
