Hi Sergey,
> -----Original Message-----
> From: Sergey Beryozkin [mailto:[email protected]]
> Sent: Thursday, 21 November 2013 11:35
> To: [email protected]
> Subject: Re: Any existing implementation on limiting calling frequencies by
> client or by IP
>
> Hi Dan
> On 20/11/13 15:54, Daniel Kulp wrote:
> >
> > On Nov 19, 2013, at 10:20 PM, Jason Wang <[email protected]>
> wrote:
> >
> >> Hi all,
> >>
> >> I would like to limit how frequently our APIs can be called, given
> >> that they will be public APIs.
> >> The limit will most likely be based on IP addresses.
> >>
> >> Is there an existing mechanism in CXF for this? Otherwise I will create
> >> my own interceptor to do it.
> >
> > Currently, no. I had some similar discussions about this with some folks
> > last week, related more to throttling per endpoint instead of per IP.
> > However, many of the considerations are the same. I came up with this list
> > of things to think about:
> >
> > 1) What should happen if more requests come in than are allowed? Should
> > they be queued and processed later? Should a fault be thrown? Should
> > some number be queued and then a fault thrown beyond that? Lots of
> > possible config options here.
> >
>
> Maybe we can ship a couple of basic interceptors which would return 503 if
> the rate is exceeded. One pair of interceptors would go to the core and would
> simply check how many concurrent requests are under way; another pair would
> go to the http module and rate-limit individual client IP addresses. The
> ideas you suggested below can be explored further to support more
> advanced options.
Yep, I think this option is a nice first step for CXF throttling. As Dan
said, I see a more sophisticated implementation (with queuing) as more of a
mediation/middleware task.
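Just to make that concrete, below is a rough, untested sketch of what the per-IP variant of such an interceptor might look like. None of this exists in CXF today; the class name, the fixed one-second window and the 503 handling are purely illustrative:

import java.io.IOException;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.apache.cxf.interceptor.Fault;
import org.apache.cxf.message.Message;
import org.apache.cxf.phase.AbstractPhaseInterceptor;
import org.apache.cxf.phase.Phase;
import org.apache.cxf.transport.http.AbstractHTTPDestination;

/**
 * Illustrative sketch only - not an existing CXF interceptor.
 * Counts requests per client IP in a coarse one-second window and
 * aborts the chain with a 503 once the limit is exceeded.
 */
public class SimpleIpThrottlingInterceptor extends AbstractPhaseInterceptor<Message> {

    private final int maxRequestsPerSecond;
    private final ConcurrentHashMap<String, AtomicInteger> counters =
        new ConcurrentHashMap<String, AtomicInteger>();
    private volatile long windowStart = System.currentTimeMillis();

    public SimpleIpThrottlingInterceptor(int maxRequestsPerSecond) {
        super(Phase.RECEIVE);
        this.maxRequestsPerSecond = maxRequestsPerSecond;
    }

    public void handleMessage(Message message) {
        long now = System.currentTimeMillis();
        if (now - windowStart > 1000) {
            // roll the window over; coarse, but good enough for a sketch
            windowStart = now;
            counters.clear();
        }

        HttpServletRequest req =
            (HttpServletRequest) message.get(AbstractHTTPDestination.HTTP_REQUEST);
        String ip = req != null ? req.getRemoteAddr() : "unknown";

        AtomicInteger count = counters.get(ip);
        if (count == null) {
            AtomicInteger fresh = new AtomicInteger();
            count = counters.putIfAbsent(ip, fresh);
            if (count == null) {
                count = fresh;
            }
        }

        if (count.incrementAndGet() > maxRequestsPerSecond) {
            HttpServletResponse resp =
                (HttpServletResponse) message.get(AbstractHTTPDestination.HTTP_RESPONSE);
            try {
                if (resp != null) {
                    resp.sendError(503, "Request rate exceeded for " + ip);
                    resp.flushBuffer();
                }
            } catch (IOException e) {
                throw new Fault(e);
            }
            message.getInterceptorChain().abort();
        }
    }
}

The core (concurrent-requests) variant would look much the same, just with a single counter instead of the per-IP map.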
I would also provide the possibility to activate these interceptors through a
WS-Policy assertion with corresponding parameters.
Regards,
Andrei.
> Thanks, Sergey
>
> > 2) If you want to do this at an endpoint level via an executor, the CXF
> > schemas do have an "executor" element for the jaxws:endpoint element
> > that can be used to set a specific executor. There are a couple of "Executor"
> > implementations that provide limits and may be able to plug right in here.
> > That said, I'd discourage this. When using an executor, a request that comes
> > in on a Jetty (or other transport) thread has to be placed on the executor,
> > and the transport thread then blocks until the request finishes. Thus, it
> > ties up two threads, and Jetty cannot process more requests while it's
> > waiting. That said, there is definitely a possible enhancement here. If
> > using a transport that supports the CXF continuations, we COULD start a
> > continuation prior to flipping to the executor. Something to think about a
> > bit more.
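As a rough, untested illustration of (2), a bounded executor can be attached to a plain JAX-WS endpoint like this; the implementor class, pool size and queue length are made up, and Dan's caveat about tying up two threads still applies:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

import javax.xml.ws.Endpoint;

public class BoundedEndpointServer {
    public static void main(String[] args) {
        // 4 worker threads, at most 20 queued requests; anything beyond
        // that is rejected with a RejectedExecutionException.
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                4, 4, 60, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(20),
                new ThreadPoolExecutor.AbortPolicy());

        // GreeterImpl is a hypothetical @WebService implementation.
        Endpoint endpoint = Endpoint.create(new GreeterImpl());
        endpoint.setExecutor(executor);
        endpoint.publish("http://localhost:9000/greeter");
    }
}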
> >
> > 3) Possibly the more "correct" answer is that this is a mediation/Camel
> > feature, not a service feature. CXF is about creating/exposing services.
> > Placing quality-of-service requirements around that service is a mediation
> > thing, and that could be considered Camel's job. This could be a
> > from("jetty://....").throttle(...).to("cxf:...") type thing. Not sure if
> > Camel's throttling has support for per-IP throttling or not. Would need to
> > investigate more.
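For (3), the Camel route Dan describes might look roughly like the sketch below; the addresses, the limit and the endpoint bean name are made up, and whether the Throttler can key on the client IP would still need to be checked:

import org.apache.camel.builder.RouteBuilder;

public class ThrottledProxyRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Accept at most 50 requests per second on the Jetty front door,
        // then hand the request on to the real CXF endpoint.
        from("jetty:http://0.0.0.0:8080/api")
            .throttle(50).timePeriodMillis(1000)
            .to("cxf:bean:realServiceEndpoint");
    }
}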
> >
> > 4) You could likely implement this as a set of CXF interceptors that use
> > the Continuations to "pause" the request for a few milliseconds or similar
> > if the load is too high. It would require some extra coding; contributions
> > back would be welcome.
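A very rough idea of what (4) could look like with the continuation API is below; the phase, the 100ms pause and the load check are placeholders, and the exact suspend/resume semantics depend on the transport:

import org.apache.cxf.continuations.Continuation;
import org.apache.cxf.continuations.ContinuationProvider;
import org.apache.cxf.message.Message;
import org.apache.cxf.phase.AbstractPhaseInterceptor;
import org.apache.cxf.phase.Phase;

/** Sketch only: pauses new requests briefly when the load check trips. */
public class PausingThrottleInterceptor extends AbstractPhaseInterceptor<Message> {

    public PausingThrottleInterceptor() {
        super(Phase.UNMARSHAL);
    }

    public void handleMessage(Message message) {
        ContinuationProvider provider =
            (ContinuationProvider) message.get(ContinuationProvider.class.getName());
        if (provider == null || !isOverloaded()) {
            return; // no continuation support on this transport, or load is fine
        }
        Continuation continuation = provider.getContinuation();
        if (continuation.isNew()) {
            // Park the request and release the transport thread; the chain is
            // re-invoked when the continuation resumes after the timeout.
            continuation.suspend(100);
        }
    }

    private boolean isOverloaded() {
        return false; // placeholder for a real load/rate check
    }
}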
> >
> > 5) Jetty (and likely Tomcat and others) has some throttling controls built
> > in at the servlet engine level. You may want to investigate that.
> > Additionally, if you run your web service behind an Apache proxy, I believe
> > the mod_proxy stuff in Apache has some settings for this.
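On (5), with embedded Jetty the built-in DoSFilter can be registered in front of the CXF servlet roughly like this; the port, path and rate below are only examples:

import java.util.EnumSet;

import javax.servlet.DispatcherType;

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.FilterHolder;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlets.DoSFilter;

public class JettyDosFilterExample {
    public static void main(String[] args) throws Exception {
        Server server = new Server(8080);
        ServletContextHandler context =
            new ServletContextHandler(ServletContextHandler.SESSIONS);
        context.setContextPath("/");

        // Jetty's DoSFilter throttles at the servlet-engine level, keyed by
        // session or remote address; requests over the limit are delayed.
        FilterHolder dosFilter = new FilterHolder(DoSFilter.class);
        dosFilter.setInitParameter("maxRequestsPerSec", "25");
        dosFilter.setAsyncSupported(true);
        context.addFilter(dosFilter, "/*", EnumSet.of(DispatcherType.REQUEST));

        // The CXFServlet (or any other servlet) would be added here as usual.
        server.setHandler(context);
        server.start();
        server.join();
    }
}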
> >
> > Anyway, lots of thoughts, but I haven't had time to really look into any of
> > them yet.
> >