So, I guess what you are saying is that instead of ConsumeOnCloseInputStream,
we use an AbortOnCloseInputStream, since reading out the rest of the stream
isn't important, given current developments in apachehc.

If so, go ahead and open a jira on it.  Also, let us know if you want to
take a swing at it or not.

https://issues.apache.org/jira/browse/JCLOUDS
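A minimal sketch of what such an AbortOnCloseInputStream could look like (the class name comes from this thread; the abort hook and all other details here are illustrative assumptions, not existing jclouds or apachehc API). On close() it drains the stream to EOF, the ConsumeOnClose behavior that lets the underlying connection be reused, unless abort() was called first, in which case it skips the drain and runs the hook:

```java
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

// Illustrative sketch only: this class does not exist in jclouds.
class AbortOnCloseInputStream extends FilterInputStream {
    private final Runnable onAbort; // hypothetical hook into the HTTP layer
    private volatile boolean aborted;

    AbortOnCloseInputStream(InputStream in, Runnable onAbort) {
        super(in);
        this.onAbort = onAbort;
    }

    /** Give up on the remaining content; close() will not drain it. */
    public void abort() {
        aborted = true;
        onAbort.run();
    }

    @Override
    public void close() throws IOException {
        try {
            if (!aborted) {
                byte[] buf = new byte[8192];
                while (in.read(buf) != -1) {
                    // discard: consume to EOF so the connection stays reusable
                }
            }
        } finally {
            super.close();
        }
    }
}
```

The idea is that the common path keeps today's consume-on-close semantics, and a caller who hits an error before reading anything calls abort() first to skip the drain, trading connection reuse for not downloading a large blob to nowhere.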

Cheers,
-A


On Wed, May 15, 2013 at 9:19 AM, Leandro Gandinetti
<[email protected]> wrote:

> Hi Adrian,
>
> I'm not certain I fully understand the apachehc issue. Is there no abort
> method in the implementation?
>
> Sorry about that, but I may have missed something when reading the English
> text (I'm from Brazil).
>
> Anyway, if I got it right, we could add an abort to Payload so I can call
> it before closing the stream, right? I think this would solve my problem,
> since I could call abort in failure cases.
>
> It would also be useful if closing the stream called abort automatically
> when no bytes have been read from it.
>
> Sorry again if I'm talking nonsense, and thanks a lot!
>
>
> 2013/5/15 Adrian Cole <[email protected]>
>
> > This behavior was originally in place for the rationale noted in apachehc
> > (to ensure the underlying streams are closed).
> >
> >
> >
> > http://hc.apache.org/httpcomponents-client-ga/tutorial/html/fundamentals.html
> > (section 1.1.5)
> >
> > When we emulated this, there was no "abort" op in apachehc, and so we
> > didn't copy that pattern.  I suspect that we could add abort to Payload
> > and somehow ensure that consumeOnClose is called unless someone called
> > abort.
> >
> > wdyt?
> >
> >
> > On Wed, May 15, 2013 at 7:48 AM, Leandro Gandinetti
> > <[email protected]> wrote:
> >
> > > Hi,
> > >
> > > Does anyone know if there's a way to keep ConsumeOnCloseInputStream
> > > from flushing all content when the blob's InputStream is closed?
> > >
> > > I'm having trouble when I get an InputStream from a blob that I need
> > > to close (on a program error) without reading any data.
> > >
> > > The jclouds implementation downloads all of the blob's content to
> > > nowhere before closing it, which takes unnecessary time and bandwidth,
> > > especially with large files.
> > >
> >
>
