@Vitalii,

I'm not sure.  It sounds like he wants to receive a call and then process
it but I'm not positive.  That's why I was asking for a clearer
definition.  Here's one of the initial statements.

"In our use case we are doing lot of crunching, DB and external REST
service calls? There is a limit on external REST service calls we can
make. I can restrict the call to external services using a thread
pool. *But I was thinking if it is possible to limit when receiving the*


*request, so that we can fail fast rather than limit while making
theexternal call. If request is crossing the limit sending error to the**caller
is fine.*"

On Mon, Oct 3, 2016 at 10:32 PM, Vitalii Tymchyshyn <[email protected]> wrote:

> I am not sure that Debraj was talking about incoming calls. And I was also
> looking for a way to limit number of concurrent exchanges being sent to
> given endpoint.
> In the Async scenario even thread pool can't help because one can make
> unlimited number of exchanges with one thread.
> And Throttler does not account for concurrent request amount, so it can't
> be used to limit concurrency level. I am actually thinking of extending
> Throttler.
> To be specific, my use case is batch processing where I need to make some
> web service calls with Netty. Currently, without the limitation, it can open
> up to a few hundred concurrent sockets, which unnecessarily overloads the
> server. I'd like to set a limit of e.g. 20 concurrent calls with others
> waiting (similar to a database connection pool).
> Netty4 component has limits to set, but it starts to fail when limit is
> reached instead of waiting. It would be very useful to have a generic
> module to help in such cases.
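The waiting behavior Vitalii asks for (up to N calls in flight, later callers blocking for a free slot, like borrowing from a connection pool) can also be sketched with a semaphore, this time with a blocking acquire. Again a hypothetical sketch in plain JDK, not Camel or Netty API:

```java
import java.util.concurrent.Semaphore;
import java.util.function.Supplier;

// Hypothetical blocking limiter in the spirit of a connection pool: at most
// `limit` calls run concurrently; callers beyond that wait their turn.
public class BlockingCallLimiter {
    private final Semaphore permits;

    public BlockingCallLimiter(int limit) {
        this.permits = new Semaphore(limit, true); // fair: waiters served FIFO
    }

    public <T> T call(Supplier<T> body) {
        permits.acquireUninterruptibly(); // blocks while `limit` calls are in flight
        try {
            return body.get();
        } finally {
            permits.release(); // always free the slot, even if the call fails
        }
    }

    public int available() {
        return permits.availablePermits();
    }
}
```

Wrapping each outgoing web service call in `call(...)` would cap the number of open sockets at the configured limit regardless of how many exchanges the route produces.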
>
> Best regards, Vitalii Tymchyshyn
>
>
> On Sun, Oct 2, 2016 at 11:27, Brad Johnson <[email protected]> wrote:
>
> > Ah, so you aren't really concerned about the incoming calls, per se, it's
> > the number of outgoing calls.  And to limit that you want to limit the
> > incoming calls?  Are the incoming calls sending data in that can be
> > processed asynchronously or are they returning chunks of data to the
> > caller?
> >
> > On Sat, Oct 1, 2016 at 2:45 PM, Debraj Manna <[email protected]>
> > wrote:
> >
> > > Thanks Brad for replying.
> > >
> > > In our use case we are doing a lot of crunching, DB and external REST
> > > service calls. There is a limit on the external REST service calls we can
> > > make. I can restrict the calls to external services using a thread
> > > pool. But I was thinking if it is possible to limit when receiving the
> > > request, so that we can fail fast rather than limit while making the
> > > external call. If a request crosses the limit, sending an error to the
> > > caller is fine.
> > >
> > >
> > >
> > > On 10/1/16, Brad Johnson <[email protected]> wrote:
> > > > The first question I'd have is "are you sure you have a problem with the
> > > > number of incoming requests?"  One of the biggest problems I find in the
> > > > field is premature optimization. If you have a fairly good characterization
> > > > of the problem, the number of requests anticipated, the length of time to
> > > > process the incoming request, etc., you can set up JMeter to stress test
> > > > your application.  That will let you change configuration options in Camel
> > > > and see if the response is more in line with what you are expecting.
> > > >
> > > > What exactly are you trying to accomplish by limiting concurrent requests?
> > > > What do you want to happen if there are too many requests? Are these
> > > > request/responses that you are getting and sending data back after some
> > > > lengthy operations, or are you mostly receiving data to be processed and
> > > > then sending an "OK" response back?  In the case of the latter you can put
> > > > the incoming data on a SEDA queue and immediately return an "OK".  Is it
> > > > that the incoming request is resulting in a lot of number crunching,
> > > > database calls, or other operations that take too long, and the number of
> > > > requests is bogging things down before sending a response back to the user?
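The SEDA pattern suggested above (park the payload on a queue, answer "OK" immediately, let workers drain it) can be illustrated with a plain bounded queue. This is a stdlib analogue of the idea, not the Camel `seda:` component itself, and the class name is invented:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Stdlib analogue of the seda-queue pattern: the receiving side enqueues
// and acknowledges immediately; processing happens elsewhere. A bounded
// queue also gives back-pressure: offer() fails fast when the queue is full.
public class AckThenProcess {
    private final BlockingQueue<String> work;

    public AckThenProcess(int capacity) {
        this.work = new ArrayBlockingQueue<>(capacity);
    }

    /** Returns "OK" if accepted, "BUSY" if the queue is full (fail fast). */
    public String receive(String payload) {
        return work.offer(payload) ? "OK" : "BUSY";
    }

    /** Called by a worker; returns the next payload, or null if none is queued. */
    public String poll() {
        return work.poll();
    }
}
```

A worker thread would loop on the queue and do the crunching and external calls, decoupled from the request threads that keep answering "OK".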
> > > >
> > > > Camel has a wide range of components that can provide RESTful APIs.  They
> > > > are all going to be a little different in their behavior.  For example, the
> > > > Netty component is going to use NIO under the covers to handle incoming
> > > > data.
> > > > http://camel.apache.org/rest-dsl.html
> > > >
> > > > If you use Jetty you can look at the min and max settings on the thread
> > > > pool. Jetty also has continuations, which free up the incoming request
> > > > threads and use a callback mechanism to send the response back when it
> > > > is finished.
> > > > http://camel.apache.org/jetty.html
> > > >
> > > > But really, a bit more detail and code about the use case and what it is
> > > > you're trying to do would be helpful.  Do you want the request to send an
> > > > error to the client if there are too many incoming requests? Why is the
> > > > number of concurrent requests a concern?  Is the incoming data large chunks
> > > > of data that are gobbling up memory, or is the processing expensive, or
> > > > something else?
> > > >
> > > > On Sat, Oct 1, 2016 at 9:43 AM, Debraj Manna <[email protected]>
> > > > wrote:
> > > >
> > > >> Hi
> > > >>
> > > >> I have seen Throttler <http://camel.apache.org/throttler.html> in camel.
> > > >> Is there anything available in camel that restricts the number of
> > > >> concurrent accesses, something like this as mentioned here
> > > >> <https://github.com/google/guava/blob/master/guava/src/com/google/common/util/concurrent/RateLimiter.java#L41>?
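The distinction behind this question is worth making concrete: a rate limiter (like the linked Guava class) bounds how many calls may *start* per unit of time, while a concurrency limiter bounds how many run *at once*. A deliberately naive, JDK-only sketch of the rate side, with the time passed in explicitly so the behavior is deterministic:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Naive sliding-window rate limiter, for illustration only: it bounds how
// many calls START per window, regardless of whether earlier calls have
// already finished. Contrast with a Semaphore, which bounds calls in flight.
public class WindowRateLimiter {
    private final int maxPerWindow;
    private final long windowMillis;
    private final Deque<Long> starts = new ArrayDeque<>();

    public WindowRateLimiter(int maxPerWindow, long windowMillis) {
        this.maxPerWindow = maxPerWindow;
        this.windowMillis = windowMillis;
    }

    public synchronized boolean tryStart(long nowMillis) {
        // Drop start timestamps that have left the window.
        while (!starts.isEmpty() && nowMillis - starts.peekFirst() >= windowMillis) {
            starts.pollFirst();
        }
        if (starts.size() >= maxPerWindow) {
            return false; // rate exceeded, even if earlier calls already completed
        }
        starts.addLast(nowMillis);
        return true;
    }
}
```

This also suggests one answer to the question below: many REST APIs rate-limit because it caps load per client over time, whereas a concurrency cap alone lets a fast client issue an unbounded number of short calls.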
> > > >>
> > > >> Also, the below seems to be a more generic query, but I thought of asking
> > > >> here if anyone can provide some thoughts on it:
> > > >>
> > > >> I have observed that most REST APIs do rate limiting on requests
> > > >> rather than restricting the number of concurrent requests. Is there any
> > > >> specific reason?
> > > >>
> > > >
> > >
> >
>
