Looks fine. I might switch to composition, but that's just a style nit. For example:

class PutOnCloseOutputStream extends FilterOutputStream {
  private final okio.Buffer buffer;
  ...

  PutOnCloseOutputStream(…) {
    // an instance field can't be referenced before super(), so hand the buffer in
    this(new okio.Buffer(), …);
  }

  private PutOnCloseOutputStream(okio.Buffer buffer, …) {
    super(buffer.outputStream());
    this.buffer = buffer;
    …
  }

  @Override public void close() throws IOException {
    // put buffer.inputStream() with length buffer.size()
  }
}
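
In close() the put could look something like this (just a sketch; assuming the
constructor also stashes the BlobStore and the container/blob names in fields):

  @Override public void close() throws IOException {
    super.close(); // everything written so far is now sitting in the okio Buffer
    Blob blob = blobStore.blobBuilder(blobName)
                         .payload(buffer.inputStream())
                         .contentLength(buffer.size())
                         .build();
    blobStore.putBlob(containerName, blob);
  }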

On Tue, Aug 5, 2014 at 4:27 PM, Steve Kingsland
<steve.kingsl...@opower.com> wrote:
> This wasn't terribly complicated to handle using a ByteArrayOutputStream,
> once I fixed the callers to not closeQuietly()...
>
> Here's the calling code, that has to return an OutputStream:
>
>     public OutputStream getOutputStream(String containerName, String resourceName) throws IOException {
>         return new JcloudsObjectWritingByteArrayOutputStream(this.blobStoreContext.getBlobStore(),
>                 containerName, resourceName);
>     }
>
> And here's what JcloudsObjectWritingByteArrayOutputStream looks like (it's a
> bit long, so I put it in a gist):
> https://gist.github.com/skingsland/d2341cd52cd36c6cbb6f
>
> It's working ok with filesystem and in-memory object stores, but I'm running
> into some (apparently-unrelated) errors with the particular object store I'm
> trying to use (Ceph via S3 API). I'll save those for another email...
>
> I'd love to hear feedback on this approach. And thanks everyone for your
> help!
>
> On Tue, Aug 5, 2014 at 5:52 PM, Adrian Cole <adrian.f.c...@gmail.com> wrote:
>>
>> jclouds currently doesn't have a direct path to the OutputStream (or
>> channel), and even if it did, the things gaul mentioned would still be
>> true (e.g. you may need the content length up front).
>>
>> jclouds doesn't have a direct path to becoming Netty, so I wouldn't
>> get too excited about full-bore async. Chunking, multipart, etc. over
>> streams are very possible, though.
>>
>> Personally, I'd recommend using something like okio buffer (or some
>> other buffer) and making that easier to work with (if it isn't
>> already). https://github.com/square/okio
>>
>> Hope this helps,
>> -A
>>
>> On Tue, Aug 5, 2014 at 2:33 PM, Zack Shoylev <zack.shoy...@rackspace.com>
>> wrote:
>> > With buffered streams, for example, close() causes buffers to be flushed
>> > (which is technically what you are doing).
>> > So yes, you can get some serious exceptions when closing.
>> >
>> > ________________________________
>> > From: Steve Kingsland [steve.kingsl...@opower.com]
>> > Sent: Tuesday, August 05, 2014 9:06 AM
>> >
>> > To: user@jclouds.apache.org
>> > Subject: Re: How to write a Blob using an OutputStream?
>> >
>> > org.apache.commons.io.output.ByteArrayOutputStream sounds like a nice
>> > improvement over java.io.ByteArrayOutputStream (at least for my purposes),
>> > thanks Zack!
>> >
>> > The problem I'm running into is actually with the caller's
>> > Closeables.closeQuietly(documentOutputStream); call. That catches any
>> > IOException that's thrown from close() and logs it, instead of throwing it.
>> > That won't work for me, since I won't know if there was an error writing to
>> > the blob store until close() is called on my OutputStream. I can of course
>> > change the caller to use different error-handling for closing the stream,
>> > but it makes me wonder if using the close() method to upload the blob is the
>> > right approach. If you're given an OutputStream to write to, you'd expect
>> > the real errors to come from the write() methods, and not the close()
>> > method, right?
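
One way to let the caller see that failure is try-with-resources instead of
closeQuietly(); a rough sketch (documentBytes and the names here are just
placeholders):

  byte[] documentBytes = ...;
  try (OutputStream out = getOutputStream("my-container", "my-document")) {
      out.write(documentBytes);
  }   // close() runs here, so an IOException from the upload propagates instead of being swallowed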
>> >
>> > On Tue, Aug 5, 2014 at 7:21 AM, Zack Shoylev
>> > <zack.shoy...@rackspace.com>
>> > wrote:
>> >>
>> >> Your code seems fine. I have used
>> >> http://commons.apache.org/proper/commons-io/apidocs/org/apache/commons/io/output/ByteArrayOutputStream.html
>> >> in the past to convert between stream types, but it seems like it doesn't
>> >> match your case very well.
>> >>
>> >> Note you might have to do writeBytesToBlob() before super.close(), but you
>> >> can test that.
>> >>
>> >> Let us know how it turns out!
>> >> ________________________________
>> >> From: Steve Kingsland [steve.kingsl...@opower.com]
>> >> Sent: Monday, August 04, 2014 9:22 PM
>> >> To: user@jclouds.apache.org
>> >> Subject: Re: How to write a Blob using an OutputStream?
>> >>
>> >> OK, then it appears that my calling code (which would be difficult and
>> >> risky to change) is incompatible with jclouds' BlobStore API: my caller
>> >> wants to obtain an OutputStream for writing to the blob store, and jclouds
>> >> wants to obtain an InputStream for reading the blob's content that should
>> >> be written. Therefore, my only solution is to buffer the blob data, either
>> >> in memory or on disk, before uploading it to the blob store.
>> >>
>> >> Given that the documents I'm trying to write to the blob store will
>> >> generally be small (1KB to 1MB), I'm going with a simple approach for
>> >> providing my caller with an OutputStream that they can use to write the
>> >> blob's payload:
>> >>
>> >> class BlobWritingByteArrayOutputStream extends java.io.ByteArrayOutputStream {
>> >>
>> >>     // these are all set in the constructor
>> >>     private BlobStore blobStore;
>> >>     private String containerName, blobName;
>> >>
>> >>     // the client will have to call this when he's finished writing, so this
>> >>     // is our chance to upload the blob, now that we have the full payload in memory
>> >>     @Override
>> >>     public void close() throws IOException {
>> >>         super.close();
>> >>
>> >>         writeBytesToBlob();
>> >>     }
>> >>
>> >>     private void writeBytesToBlob() {
>> >>         byte[] payload = toByteArray();
>> >>
>> >>         Blob blob = blobStore.blobBuilder(blobName)
>> >>                              .payload(payload)
>> >>                              .contentLength(payload.length)
>> >>                              .build();
>> >>         blobStore.putBlob(containerName, blob);
>> >>     }
>> >> }
>> >>
>> >> Aside from the weird inversion of control going on and the requirement
>> >> that close() be called, I think something simple like this - to buffer the
>> >> bytes being written before uploading them to the blob store - might work
>> >> for me.
>> >>
>> >> Thoughts?
>> >>
>> >> On Mon, Aug 4, 2014 at 9:05 PM, Andrew Gaul <g...@apache.org> wrote:
>> >>>
>> >>> On Mon, Aug 04, 2014 at 08:46:37PM -0400, Steve Kingsland wrote:
>> >>> > Here is Kevin's example using PipedInputStream and
>> >>> > PipedOutputStream:
>> >>> > https://groups.google.com/d/msg/jclouds/F2pCt9i7TSg/AUF4AqOO0TMJ
>> >>> >
>> >>> > I don't have the need to use different threads, though, so instead
>> >>> > I'd
>> >>> > do
>> >>> > something like this?
>> >>>
>> >>> This will not work; putBlob blocks until the operation completes.
>> >>> Further you must use PipedInputStream/PipedOutputStream with separate
>> >>> threads to avoid deadlock, as its Javadoc states:
>> >>>
>> >>> http://docs.oracle.com/javase/7/docs/api/java/io/PipedInputStream.html
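
For reference, a rough sketch of the two-thread pattern that Javadoc describes,
with the upload reading from the pipe on a second thread (blobStore,
contentLength, writeDocumentTo, and the container/blob names below are
placeholders, not from this thread):

  final PipedInputStream in = new PipedInputStream();
  final PipedOutputStream out = new PipedOutputStream(in);

  // run the blocking putBlob on another thread, reading from the pipe
  ExecutorService executor = Executors.newSingleThreadExecutor();
  Future<String> etag = executor.submit(new Callable<String>() {
      @Override
      public String call() throws Exception {
          Blob blob = blobStore.blobBuilder("my-blob")
                               .payload(in)
                               .contentLength(contentLength) // some providers need this up front
                               .build();
          return blobStore.putBlob("my-container", blob);
      }
  });

  // meanwhile the producer writes on the current thread
  writeDocumentTo(out);
  out.close();        // signals end-of-stream to the reading side
  etag.get();         // surfaces any failure from the upload
  executor.shutdown();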
>> >>>
>> >>> Unfortunately jclouds has poor support for asynchronous operations, and
>> >>> you can really only fake the desired behavior with various InputStreams.
>> >>> I strongly recommend trying to cast your solution into some kind of
>> >>> ByteSource or InputStream.
>> >>>
>> >>> > And then when close() or flush() is called on the returned
>> >>> > OutputStream,
>> >>> > the blob is uploaded like magic? Is it OK that I'm not setting the
>> >>> > content
>> >>> > length?
>> >>>
>> >>> Some blobstores, specifically Amazon S3, require a content length,
>> >>> while
>> >>> others such as OpenStack Swift do not.
>> >>>
>> >>> --
>> >>> Andrew Gaul
>> >>> http://gaul.org/
>> >>
>> >>
>> >
>
>
