Hi Aki,

I haven't been following all of this, but isn't this really a situation where the client should be configured with a persistent store? That way when the client restarts it should resume using the previously-established sequences in each direction. Or does the current code only really support persistent stores on the provider side?
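
For reference, a minimal sketch of what I mean by a persistent store
on the client side, using the JDBC-backed RMTxStore that ships with
CXF (the exact setters, the init() call and the Derby settings below
are from memory of the 2.x store and are only placeholders):

    import org.apache.cxf.Bus;
    import org.apache.cxf.BusFactory;
    import org.apache.cxf.ws.rm.RMManager;
    import org.apache.cxf.ws.rm.persistence.jdbc.RMTxStore;

    public class ClientRmPersistence {
        public static void main(String[] args) {
            Bus bus = BusFactory.getDefaultBus();

            // JDBC-backed RM store shipped with CXF; the embedded Derby
            // URL here is just an illustration.
            RMTxStore store = new RMTxStore();
            store.setDriverClassName("org.apache.derby.jdbc.EmbeddedDriver");
            store.setUrl("jdbc:derby:rmdb;create=true");
            store.init();

            // Attach the store to the bus-level RMManager so that source
            // and destination sequences survive a client restart.
            RMManager manager = bus.getExtension(RMManager.class);
            manager.setStore(store);

            // ... create the client proxy on this bus and make calls as
            // usual.
        }
    }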

  - Dennis

On 06/05/2013 04:17 AM, Aki Yoshida wrote:
Hi Juan,

It looks like the problem occurs at the client's decoupled receiving
port after the server retransmits the response. Because you have a
request-response call, there is a step that correlates this response
message to the original exchange. But the old exchange was gone when
the old client died. As this condition is not handled correctly, it
results in an unexpected exception that terminates the processing. As
a consequence, the client can never return an ack to the server, so
the server keeps resending the response to the client.

While getting this exception and unexpectedly terminating the
processing is a bug, it is probably correct to reject the response
from the server, because the client cannot deliver this response to
the original requester. The only useful information in the response is
the ack for the original request. This ack should be processed at the
client to clean up any resources associated with the original source
sequence. But there is no place for the response payload to go.

If we could use a different programming model, the new client could
pull the old response using the persisted key of the old request. But
for the request-response model, there is no way to deliver the
response to the original requester when it is permanently gone.

So basically, if you have a request-response service, network errors
can occur at any time during the calls, but your client needs to
remain alive after transmitting a request until it receives its
response.

I don't know the requirements of your scenario. What do you expect at
the client? Typically, people use oneway calls and do any correlation
needed at the application level using two oneway calls. In that case,
you don't have this limitation.
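
As an illustration of that two-oneway pattern (purely a sketch; the
port types, payload classes and correlation field below are made up,
not taken from your service), the request and the later "response"
each travel as independent oneway calls and the application matches
them by an ID carried in the payload:

    import javax.jws.Oneway;
    import javax.jws.WebService;

    // Hypothetical payloads carrying an application-level correlation id.
    class OrderRequest { public String correlationId; public String item; }
    class OrderResult  { public String correlationId; public String status; }

    // The "request" travels as one oneway call to the service...
    @WebService
    interface OrderService {
        @Oneway
        void submitOrder(OrderRequest request);
    }

    // ...and the "response" arrives later as a separate oneway call to a
    // callback endpoint hosted by the client, matched on correlationId.
    @WebService
    interface OrderCallback {
        @Oneway
        void orderProcessed(OrderResult result);
    }

The client hosts the OrderCallback endpoint, remembers (or persists)
the correlationId it sent, and matches the incoming orderProcessed
call against it, so WS-RM only ever has oneway messages to retransmit
in either direction.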

regards, aki




2013/6/4 Aki Yoshida <[email protected]>

Hi Juan,
Thanks for uploading the logs. I am not yet 100% sure, but it looks
like there is an issue in the response delivery for the
request-response WS-RM case after an error.

As we have several persistent recovery tests for WS-RM, I initially
thought the problem occurred after a successful response
retransmission from the server (hence my question about how the client
was configured), but the problem is occurring before that. I'll look
into it today.

regards, aki


2013/6/3 Juan Alberto Lopez Cavallotti <[email protected]>

Hi Aki,

Please find the log here: http://pastebin.com/B0TtSduG

About the client: it works correctly all the time when I use the CXF
facilities outside Mule, that is, connecting to a Spring webapp with
the same service configured. My goal is to solve this for any type of
client, correct or incorrect.

Thanks,
Juan
MuleSoft


On Mon, Jun 3, 2013 at 10:14 AM, Aki Yoshida <[email protected]> wrote:

Hi Juan,
your attachment didn't get to the list. Maybe you should put it on
some remote storage that has HTTP access.

And how is your client configured? Can you make sure that you
configured the decoupled endpoint and persistence at the client?
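
For reference, the decoupled endpoint part would look roughly like
this when done programmatically rather than in beans.xml (just a
sketch; the address is a placeholder, and the usual alternative is the
DecoupledEndpoint attribute on http-conf:client in the conduit
configuration):

    import org.apache.cxf.endpoint.Client;
    import org.apache.cxf.frontend.ClientProxy;
    import org.apache.cxf.transport.http.HTTPConduit;
    import org.apache.cxf.transports.http.configuration.HTTPClientPolicy;

    public final class DecoupledEndpointSetup {

        // 'port' is any JAX-WS proxy created elsewhere.
        public static void enableDecoupledEndpoint(Object port) {
            Client client = ClientProxy.getClient(port);
            HTTPConduit conduit = (HTTPConduit) client.getConduit();

            HTTPClientPolicy policy = new HTTPClientPolicy();
            // The client opens a listener at this address and the server
            // sends the response and RM acks there instead of on the
            // original HTTP response.
            policy.setDecoupledEndpoint("http://localhost:9995/decoupled_endpoint");
            conduit.setClient(policy);
        }
    }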

regards, aki


2013/5/31 Juan Alberto Lopez Cavallotti <[email protected]>

Hi Aki,

Thank you for your response.

Yes, I have a decoupled endpoint. I have a standalone client based on
CXF's sample projects (nothing fancy added) which is currently working
fine; I'm killing it randomly so I can handle that kind of outage. I'm
aware that you also have a custom interceptor for generating
communication errors.

My problem context is the following:

I have exposed a service through Mule ESB facilities (I'm currently
trying to fix a bug in Mule's code), as described in this section of
the documentation:

http://www.mulesoft.org/documentation/display/current/Building+Web+Services+with+CXF#BuildingWebServiceswithCXF-CreatingaJAX-WSService

Also, I have enabled WS-RM on the server via the Spring configuration
file as I showed before, so in this case Mule is responsible for
building and exposing the server endpoint through HTTP. This facility
is powered by our UniversalConduit (the source code link is in the
previous comment).
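
For context, the programmatic equivalent of what that Spring
configuration enables would look roughly like the sketch below (the
service class and address are made up; in our case the endpoint is
actually built by Mule rather than by a plain CXF factory):

    import javax.jws.WebService;

    import org.apache.cxf.jaxws.JaxWsServerFactoryBean;
    import org.apache.cxf.ws.addressing.WSAddressingFeature;
    import org.apache.cxf.ws.rm.feature.RMFeature;

    // Hypothetical service implementation, only to make the sketch complete.
    @WebService
    class GreeterImpl {
        public String greetMe(String name) { return "Hello " + name; }
    }

    public class RmServerSketch {
        public static void main(String[] args) {
            JaxWsServerFactoryBean factory = new JaxWsServerFactoryBean();
            factory.setServiceBean(new GreeterImpl());
            factory.setAddress("http://localhost:9000/greeter");

            // WS-RM needs WS-Addressing; adding both features turns on
            // reliable messaging for the endpoint created by this factory.
            factory.getFeatures().add(new WSAddressingFeature());
            factory.getFeatures().add(new RMFeature());

            factory.create();
        }
    }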

I also have to note that when there is no outage on either the client
or the server side, everything works perfectly. The server receives
the requests through its endpoint and answers the client through the
decoupled endpoint.

Now the problem is: when there is an outage on the backchannel (i.e.
the client dies), the message is put in a retransmission queue (which
is actually a good thing), but for some reason the redelivery queue
isn't even able to attempt the retries and also never gives up.

Please find attached an execution log at debug level so you get a
sense of what is going on. The last log statements repeat ad infinitum
until I kill the server.

Now, I know there is a bug in our conduit (or some other part of our
code), so I'm trying to understand why I'm seeing this behavior and
hopefully be able to fix it.

Please let me know if you need more detailed information.

About upgrading the version: currently it is not so easy. We have
upgraded to 2.5.9 for our latest release, so maybe upgrading to 2.5.10
could be an option, but only for a future release.

Thanks for your help,

Regards,
Juan



On Fri, May 31, 2013 at 6:00 AM, Aki Yoshida <[email protected]> wrote:

Hi Juan,
I have a couple of questions.

You mention the backchannel; that means you have configured a
decoupled endpoint where the server can asynchronously deliver ack
messages to? At least, I didn't see it in your beans.xml file. And you
have a request-response service, right? That means there are
application messages going in both directions.

And when you say no retry is happening back to the client after the
restart, have you enabled the persistence? If the client didn't
persist the sequence, it cannot handle the messages sent back on that
sequence. I didn't see the persistence enabled in your beans.xml, so
it's not clear to me whether that is your entire configuration or you
are adding additional stuff programmatically.

So I still don't know if this is some inconsistent configuration, a
known bug, or a new bug/limitation.

regards, aki
P.S. In 2.6.x, there is an option to set the maximum number of
retransmissions, and there is also a way to permanently terminate a
message or a sequence from the persistence store over JMX. 2.5.1 is
really old. Do you need to stick to it, or can you at least get to the
more recent 2.5.10 or 2.6.8?



2013/5/29 Juan Alberto Lopez Cavallotti <[email protected]>

Hi Aki,

Thanks for getting back to me. If you wish to see the implementation
of the conduit, here is a GitHub link for it:

https://github.com/mulesoft/mule/blob/mule-3.3.2/modules/cxf/src/main/java/org/mule/module/cxf/transport/MuleUniversalConduit.java

What I'm trying to do is make this conduit handle outages on the
backchannel correctly. The CXF version we're using is 2.5.1. The
scenario is the following:

I have WS-RM working; on the happy path (see the sketch after this list):

- Client creates a sequence.
- Server acknowledges on the backchannel.
- Client sends the request.
- Server answers on the backchannel.
- Client acknowledges the answer.
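
As a minimal client-side sketch of that happy path (Greeter/greetMe
are stand-ins for our actual service; the sequence creation, the
acknowledgements and any retransmission are handled by the RM layer
around the single proxy call):

    import javax.jws.WebService;

    import org.apache.cxf.jaxws.JaxWsProxyFactoryBean;
    import org.apache.cxf.ws.addressing.WSAddressingFeature;
    import org.apache.cxf.ws.rm.feature.RMFeature;

    // Hypothetical service interface, only to make the sketch complete.
    @WebService
    interface Greeter {
        String greetMe(String name);
    }

    public class RmClientSketch {
        public static void main(String[] args) {
            JaxWsProxyFactoryBean factory = new JaxWsProxyFactoryBean();
            factory.setServiceClass(Greeter.class);
            factory.setAddress("http://localhost:9000/greeter");
            factory.getFeatures().add(new WSAddressingFeature());
            factory.getFeatures().add(new RMFeature());

            Greeter greeter = (Greeter) factory.create();

            // CreateSequence, the request, the response on the backchannel
            // (or decoupled endpoint) and the acks all happen around this
            // one call.
            System.out.println(greeter.greetMe("world"));
        }
    }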

What is happening here is that when I have some outage on the client,
for example the client dies suddenly, the message gets into the
redelivery queue and it gets stuck there forever, constantly logging
that message.

I would like to understand how I can make the redelivery queue give up
after a certain number of retries, but I believe it is currently not
even able to retry, so I would like to understand the reason why.

Regards,
Juan


On Wed, May 29, 2013 at 2:50 PM, Aki Yoshida <[email protected]> wrote:

I suppose you are seeing this warning because you have not configured
a separate channel (i.e. a decoupled endpoint) for ack or response
delivery. So when the HTTP response connection is gone, you will get
some kind of stuck message at least until the next message comes in.

Can't say anything about line 101 if we don't know the CXF version.
In any case, if you don't (or can't, because of your firewall rules)
configure a decoupled endpoint, you should stick to oneway calls and
set AcknowledgementInterval to 0 so that you get your request ack'ed
in its response.
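
A sketch of that setting, assuming the 2.5.x package layout
(RMAssertion moved to org.apache.cxf.ws.rmp.v200502 in later
releases); an acknowledgement interval of 0 makes the RM destination
acknowledge immediately, so a oneway request gets its ack on its own
HTTP response:

    import org.apache.cxf.ws.rm.feature.RMFeature;
    import org.apache.cxf.ws.rm.policy.RMAssertion;

    public final class OnewayRmConfig {

        // Builds an RMFeature that requests immediate acknowledgements;
        // add the returned feature to the client (and server) factory's
        // feature list.
        public static RMFeature immediateAckFeature() {
            RMAssertion.AcknowledgementInterval ackInterval =
                    new RMAssertion.AcknowledgementInterval();
            ackInterval.setMilliseconds(Long.valueOf(0));

            RMAssertion assertion = new RMAssertion();
            assertion.setAcknowledgementInterval(ackInterval);

            RMFeature feature = new RMFeature();
            feature.setRMAssertion(assertion);
            return feature;
        }
    }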

If you have further questions, please describe your scenario in more
detail (version, req/resp or oneway, etc.). And I don't know what your
conduit is doing, so it's really hard to say anything based on the
info you have provided so far.



2013/5/29 Juan Alberto Lopez Cavallotti <[email protected]>

Hello,


I have a custom conduit implementation which takes care of the
integration of CXF and Mule ESB. I am able to use the WS-RM
functionality on the happy path over this conduit, but when something
goes wrong on the backchannel the message gets stuck in the redelivery
queue and constantly prints the following log statement:

WARN 2013-05-27 16:57:33,917 [RMManager-Timer-2051976295]
org.apache.cxf.endpoint.DeferredConduitSelector: MessageObserver not found

This is actually happening on line 101 of the class
org.apache.cxf.endpoint.AbstractConduitSelector.

I would like to diagnose the cause of this situation.

Please find attached my configuration file.

Thanks for your help in advance.

Regards,
Juan Alberto López Cavallotti



