Hmm... since my use case (or variants of try/finally semantics) can't be that uncommon, I was kind of hoping for a more definitive answer on this.
(I'm pretty sure we're not doing async invocations. I don't think we
have one-way methods, but I'll take a closer look.)

Would it perhaps be safer to look into adding JAX-WS handler chains on
the port or dispatcher Binding? (I've put rough sketches of the current
interceptor approach and of the handler idea below the quoted thread.)

Eirik.

On Wed, Apr 8, 2015 at 10:07 PM, Daniel Kulp <[email protected]> wrote:
> Do you have any one-way methods? For those, the “In” chain would never get
> called and thus your counters would never get decremented.
>
> Another thing I’d need to think about would be use of the async methods that
> would then time out. For those, the outgoing chain would be done, but then
> the incoming chains would never be called either.
>
> Dan
>
>
>> On Apr 7, 2015, at 10:14 AM, Eirik Bjørsnøs <[email protected]> wrote:
>>
>> Hello,
>>
>> We have implemented a set of interceptors for limiting the number of
>> concurrent invocations to each remote service.
>>
>> The idea is that each remote service (be it JAX-WS dispatchers or SEI
>> ports) should be invoked by at most N threads concurrently. If the
>> remote service has more than N active invocations, we should instead
>> return a SOAP fault with a message saying that the system is over
>> capacity. (We want to limit the load to the number of requests we know
>> we can serve.)
>>
>> If I were to implement this using servlet filters, I would have written
>> a Filter.doFilter method like this:
>>
>>     int current = counter.incrementAndGet();
>>     try {
>>         if (current > limit) {
>>             throw new ServletException("System is over capacity");
>>         }
>>         chain.doFilter(request, response);
>>     } finally {
>>         counter.decrementAndGet();
>>     }
>>
>> However, CXF's interceptor design doesn't lend itself to using Java's
>> try/finally syntax.
>>
>> What is the proper way of implementing try/finally semantics using CXF
>> interceptors on a Client?
>>
>> What we currently do:
>>
>> 1) We're adding a QoSBeforeInterceptor early in the Client's out
>> interceptor chain. This implements handleMessage, where we increment
>> the counter. If it is larger than N we throw a SoapFault. In this case
>> handleFault gets called (by CXF), where we decrement the counter.
>>
>> 2) We're adding a QoSAfterInterceptor early in the Client's in
>> interceptor chain. This implements handleMessage, where we decrement
>> the counter.
>>
>> So by design the counter is always incremented by
>> QoSBeforeInterceptor.handleMessage. It should always be decremented by
>> either QoSBeforeInterceptor.handleFault OR by
>> QoSAfterInterceptor.handleMessage.
>>
>> We are seeing reports from production indicating that in some cases the
>> counter might have been incremented, but not decremented, for a
>> message, leading to a "leak" of counts. This causes the service to be
>> blocked indefinitely once the counter reaches N.
>>
>> I have not been able to reproduce this "leak" situation offline,
>> experimenting with various exception and timeout scenarios. So I'm not
>> 100% sure our design is broken.
>>
>> Still, I would very much like some feedback on my try/finally
>> implementation and the assumptions it is based on:
>>
>> A) When a SoapFault is thrown from my interceptor's handleMessage, will
>> it always lead to handleFault being invoked on the same interceptor?
>>
>> B) When a SoapFault is thrown from handleMessage by some later
>> interceptor, does this always lead to handleFault being called on all
>> earlier interceptors in the same chain?
>> C) If handleFault is not called on the outgoing chain of a Client, will
>> handleMessage always be called on an interceptor on the Client's
>> incoming chain?
>>
>> I am trying to investigate reasons for the counter being incremented
>> but not decremented. That is, cases where handleMessage is called on
>> the out chain, but neither outgoing handleFault nor incoming
>> handleMessage is called.
>>
>> Assuming I can't be the first to implement try/finally semantics using
>> interceptors, maybe someone can spot a weakness in my design?
>>
>> Cheers,
>> Eirik.
>
> --
> Daniel Kulp
> [email protected] - http://dankulp.com/blog
> Talend Community Coder - http://coders.talend.com
>
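
For reference, here is roughly what the interceptor pair described in the
quoted mail looks like. This is a simplified sketch, not our exact
production code: the class names, the shared AtomicInteger, the chosen
phases and the limit are illustrative only, and it throws a plain Fault
where we actually throw a SoapFault.

    import java.util.concurrent.atomic.AtomicInteger;

    import org.apache.cxf.interceptor.Fault;
    import org.apache.cxf.message.Message;
    import org.apache.cxf.phase.AbstractPhaseInterceptor;
    import org.apache.cxf.phase.Phase;

    public class QoSBeforeInterceptor extends AbstractPhaseInterceptor<Message> {

        private final AtomicInteger counter;
        private final int limit;

        public QoSBeforeInterceptor(AtomicInteger counter, int limit) {
            super(Phase.SETUP); // early in the client's out chain
            this.counter = counter;
            this.limit = limit;
        }

        @Override
        public void handleMessage(Message message) {
            // the "try" part: count the invocation, reject if over capacity
            if (counter.incrementAndGet() > limit) {
                throw new Fault(new RuntimeException("System is over capacity"));
            }
        }

        @Override
        public void handleFault(Message message) {
            // invoked by CXF when this or a later out-chain interceptor throws
            counter.decrementAndGet();
        }
    }

    // Separate class in practice; package-private here only to keep the sketch in one file.
    class QoSAfterInterceptor extends AbstractPhaseInterceptor<Message> {

        private final AtomicInteger counter;

        QoSAfterInterceptor(AtomicInteger counter) {
            super(Phase.RECEIVE); // early in the client's in chain
            this.counter = counter;
        }

        @Override
        public void handleMessage(Message message) {
            // the "finally" part for the normal response path
            counter.decrementAndGet();
        }
    }

The wiring is roughly along these lines (hypothetical, for an SEI port proxy):

    AtomicInteger counter = new AtomicInteger();
    Client client = ClientProxy.getClient(port);
    client.getOutInterceptors().add(new QoSBeforeInterceptor(counter, limit));
    client.getInInterceptors().add(new QoSAfterInterceptor(counter));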
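
Here is the kind of JAX-WS handler I had in mind as an alternative. Again
just a sketch (the class name and wiring are made up), and it assumes that
close() is invoked at the end of every message exchange for handlers whose
handleMessage was called, which is exactly the guarantee I'd want confirmed
for the one-way and async/timeout cases:

    import java.util.concurrent.atomic.AtomicInteger;

    import javax.xml.ws.handler.LogicalHandler;
    import javax.xml.ws.handler.LogicalMessageContext;
    import javax.xml.ws.handler.MessageContext;

    public class QoSHandler implements LogicalHandler<LogicalMessageContext> {

        private final AtomicInteger counter;
        private final int limit;

        public QoSHandler(AtomicInteger counter, int limit) {
            this.counter = counter;
            this.limit = limit;
        }

        @Override
        public boolean handleMessage(LogicalMessageContext context) {
            boolean outbound = Boolean.TRUE.equals(
                    context.get(MessageContext.MESSAGE_OUTBOUND_PROPERTY));
            if (outbound && counter.incrementAndGet() > limit) {
                // the "try" part: reject the outgoing request when over capacity
                throw new RuntimeException("System is over capacity");
            }
            return true;
        }

        @Override
        public boolean handleFault(LogicalMessageContext context) {
            return true;
        }

        @Override
        public void close(MessageContext context) {
            // the "finally" part: runs once at the end of the message exchange
            counter.decrementAndGet();
        }
    }

Registered on the port's Binding, something like:

    Binding binding = ((BindingProvider) port).getBinding();
    List<Handler> handlers = binding.getHandlerChain();
    handlers.add(new QoSHandler(counter, limit));
    binding.setHandlerChain(handlers);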
