Thank you so much for the answer.
I have chosen to keep the non-blocking I/O model to preserve
extensibility. However, I followed your suggestion to reduce the
complexity: the custom HttpAsyncResponseProducer wasn't necessary, so I
took inspiration from one of the solutions in the tutorial (
http://hc.apache.org/httpcomponents-core-ga/tutorial/html/nio.html#d5e904)
with the addition of a mutex.
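
For the record, the coordination I ended up with boils down to the
pattern below. This is only a self-contained sketch in plain
java.util.concurrent, not the actual HttpCore code; ResponseSlot and
PendingExchanges are hypothetical names I use here for illustration.
The UDP client thread delivers the datagram and releases the latch,
while the side building the HTTP response blocks in await() until the
payload is available:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Hypothetical helper: holds one buffered datagram and a latch that the
// response-producing side waits on. In the real proxy the payload would
// be turned into an HttpResponse instead of returned as a byte[].
final class ResponseSlot {
    private final CountDownLatch done = new CountDownLatch(1);
    private volatile byte[] payload;

    // Called by the UDP client thread when the datagram arrives.
    void complete(byte[] datagram) {
        this.payload = datagram;
        done.countDown();
    }

    // Called by the thread that must build the HTTP response; blocks
    // until the datagram has been dispatched (or the timeout expires).
    byte[] await(long timeout, TimeUnit unit) throws InterruptedException {
        if (!done.await(timeout, unit)) {
            throw new IllegalStateException("no datagram received in time");
        }
        return payload;
    }
}

// Hypothetical helper: the map that routes each incoming datagram to the
// exchange that is waiting for it, keyed by a request identifier.
final class PendingExchanges {
    private final Map<String, ResponseSlot> slots = new ConcurrentHashMap<>();

    ResponseSlot register(String requestId) {
        ResponseSlot slot = new ResponseSlot();
        slots.put(requestId, slot);
        return slot;
    }

    void dispatch(String requestId, byte[] datagram) {
        ResponseSlot slot = slots.remove(requestId);
        if (slot != null) {
            slot.complete(datagram);
        }
    }
}

class ProxyHandoffDemo {
    public static void main(String[] args) throws Exception {
        PendingExchanges pending = new PendingExchanges();
        ResponseSlot slot = pending.register("req-1");
        // Simulate the UDP client delivering the datagram on its own thread.
        new Thread(() -> pending.dispatch("req-1",
                "datagram-payload".getBytes())).start();
        System.out.println(new String(slot.await(5, TimeUnit.SECONDS)));
    }
}
```

The only difference in the real code is that the waiting side is the
response-generation path, so the whole response ends up buffered in
memory before it is handed to the HTTP layer, as you suggested.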


Francesco Corazza


On Mon, Jul 2, 2012 at 8:21 PM, Oleg Kalnichevski <[email protected]> wrote:

> On Mon, 2012-07-02 at 18:33 +0200, Francesco Corazza wrote:
> > Hi Oleg,
> > thanks for your reply.
> >
> > Yes, you are understanding it well.
> > The client has its own worker that waits for the response.
> > At this point I don't need to stream data, but I may have to deal
> > with it in the future if my approach doesn't scale in constrained
> > networks (the context of this custom protocol).
> >
> >
>
> In this case you might as well consider using a framework based on the
> classic (blocking) I/O, as no matter what you will end up with one
> worker thread per request / response exchange.
>
> If you still want to continue using a non-blocking I/O model you could
> consider buffering the entire response message in memory in order to
> reduce the complexity of your proxy (at the expense of a larger memory
> footprint).
>
> Hope this helps
>
> Oleg
>
> >
> > Francesco Corazza
> >
> >
> >
> > On Jul 2, 2012, at 6:08 PM, Oleg Kalnichevski wrote:
> >
> > > On Mon, 2012-07-02 at 15:18 +0200, Francesco Corazza wrote:
> > >> Hi,
> > >> I'm doing a cross protocol proxy to translate http to a custom
> protocol
> > >> (udp based). Therefore my architecture is composed by the http async
> server
> > >> on one side and the custom client on the other side. I have created a
> > >> custom HttpAsyncRequestHandler to manage the server's behavior.
> > >> To produce the request there are no problems because I haven't to
> manage
> > >> anything asynchronously; using the BasicAsyncRequestConsumer is
> enough for
> > >> my purposes.
> > >>
> > >> The problem is the opposite direction. To produce the HTTP
> > >> response, the proxy should wait for the "completion" of the client
> > >> process to get the datagram and translate it into an HTTP response.
> > >> In other words, the client is the producer and the server is the
> > >> consumer when creating an HTTP response.
> > >>
> > >> My solution (actually a dirty workaround) was to define a subclass
> > >> of HttpAsyncResponseProducer to manage this asynchronous behavior.
> > >> Each instance of the custom HttpAsyncResponseProducer has a mutex
> > >> to synchronize this consumer thread (done in the generateResponse
> > >> method) and to continue iff the producer thread (the client) has
> > >> already dispatched the response. On the client side, I created a
> > >> map to dispatch the correct response to the corresponding
> > >> ResponseProducer instance.
> > >>
> > >> I don't like my solution because of the overhead created by the
> > >> one-thread-per-request approach. Is there a smarter way to do this
> > >> job (i.e., by exploiting some API that I have overlooked)?
> > >>
> > >>
> > >> Thank you all.
> > >> Best regards,
> > >>
> > >>
> > >> Francesco Corazza
> > >
> > > Hi Francesco
> > >
> > > Do I understand it correctly that the outgoing service with the custom
> > > protocol is effectively synchronous and requires a dedicated worker
> > > thread to execute? Do you need to stream response content or can you
> > > afford buffering the entire message in memory?
> > >
> > > Oleg
> > >
> > >
> > > ---------------------------------------------------------------------
> > > To unsubscribe, e-mail: [email protected]
> > > For additional commands, e-mail: [email protected]
> > >
> >
> >
>
>
