> Hi Oleg,
>
> Thanks for your reply!
>
> I started writing a custom ChunkedOutputStream, and it appears that in
> the current implementation calling flush() writes out whatever is in the
> buffer as the next chunk, which means the next call to write() starts
> the chunk after that. So I backed up and instead wrote an HttpEntity,
> based on InputStreamEntity, that takes two input streams: one for the
> request and one for the data. Its writeTo() method writes out the
> request stream, calls flush(), then writes out the data stream. This
> seems to put a chunk boundary where I want it while only having to
> subclass the entity. But I foresee cases where it may write a chunk
> naturally, buffer the last little bit of the first stream, and then
> write that tiny chunk when I tell it to flush. (To fix that, I think
> I'd have to do as you suggested: write a more intelligent
> ChunkedOutputStream and plumb it through.) Other than the behavior in
> that case being somewhat suboptimal, is there anything wrong with this
> approach? Any other problem with doing it this way?
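
For concreteness, a minimal sketch of the entity described above,
assuming the HttpCore 4.x AbstractHttpEntity base class; the class name
TwoStreamEntity and the 4K copy buffer are illustrative, not taken from
the original message:

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import org.apache.http.entity.AbstractHttpEntity;

// Sketch: an entity that writes two streams with a flush() between them,
// relying on the current ChunkedOutputStream behavior of emitting its
// buffered bytes as a chunk on flush().
public class TwoStreamEntity extends AbstractHttpEntity {
    private final InputStream head;
    private final InputStream body;

    public TwoStreamEntity(InputStream head, InputStream body) {
        this.head = head;
        this.body = body;
        setChunked(true); // request chunked transfer encoding
    }

    public long getContentLength() {
        return -1; // unknown length, so chunked encoding is used
    }

    public boolean isRepeatable() {
        return false; // the underlying streams can be consumed only once
    }

    public boolean isStreaming() {
        return true;
    }

    public InputStream getContent() {
        throw new UnsupportedOperationException(
            "streaming entity; use writeTo()");
    }

    public void writeTo(OutputStream out) throws IOException {
        byte[] buffer = new byte[4096];
        int n;
        while ((n = head.read(buffer)) != -1) {
            out.write(buffer, 0, n);
        }
        // With the current ChunkedOutputStream, flush() writes whatever
        // is buffered as its own chunk, placing a chunk boundary here.
        out.flush();
        while ((n = body.read(buffer)) != -1) {
            out.write(buffer, 0, n);
        }
        out.flush();
    }
}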
John,

This all sounds like quite a nasty hack to me. I am also not sure it is
a good idea to rely on a particular composition of content chunks. What
is wrong with just reading, say, the first 1K of the incoming entity,
parsing it, and deciding whether or not to proceed with reading the
remaining content based on the information contained in the entity head?

Oleg
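
A minimal sketch of the approach Oleg suggests, assuming the receiving
side has an HttpCore HttpEntity in hand; the shouldProceed() predicate
and the 1K threshold are placeholders for whatever application-level
parsing of the entity head is done:

import java.io.IOException;
import java.io.InputStream;
import org.apache.http.HttpEntity;

// Sketch: peek at the first 1K of the incoming entity and decide
// whether the remaining content is worth reading. No chunk-boundary
// tricks are needed; the decision is driven by the bytes read so far.
public class EntityHeadPeeker {

    public static void handle(HttpEntity entity) throws IOException {
        InputStream in = entity.getContent();
        try {
            byte[] head = new byte[1024];
            int total = 0;
            int n;
            // Fill the buffer, stopping early if the entity is shorter.
            while (total < head.length
                    && (n = in.read(head, total, head.length - total)) != -1) {
                total += n;
            }
            if (shouldProceed(head, total)) {
                // ... keep reading the remaining content from 'in' ...
            }
            // Otherwise simply stop reading here.
        } finally {
            in.close();
        }
    }

    // Placeholder for parsing the entity head; application-specific.
    private static boolean shouldProceed(byte[] head, int length) {
        return length > 0;
    }
}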
