> -----Original Message-----
> We still need to support it at some level of course. And in the
> future we can support it at the EPR level too. What is so bad about
> that?
>
> > And as I stated, that would seem to defeat the purpose.
>
> For this single case. And in the future this single case probably
> won't be valid. So your objection here really doesn't carry much
> weight.
Well, I beg to differ ... you're motivating your proposal as a
mechanism that would be *consistent across transports*. It's not
logical IMO to then turn around and disregard one transport that the
(supposedly standard) mechanism wouldn't work for.

> > Seems a lot simpler to me to drive this from config, which we know
> > works for every transport.
> >
> > Actually, let me restate that ...
> >
> > It would seem simpler to drive this via policies, that may be taken
> > from static config files, or equivalently may be dynamically set by
> > the application. I believe Andrea is using the latter approach to
> > control the decoupled endpoint(s) used in a system test she's
> > writing.
> >
> > Maybe that would be a compromise that gives the best of both
> > worlds?
> >
> > I would like to maintain some cardinality restriction on the
> > (automatically launched) decoupled response endpoint, by keeping it
> > per-Conduit as opposed to per-request (for reasons of lifecycle
> > mgmt and non-proliferation as I explained earlier on this thread).
> >
> > But apart from that, I've no problem with the URI (or whatever the
> > transport needs) originating from the application code as opposed
> > to the cxf.xml.
>
> Can you be more specific about what you mean by policy driven? I
> looked through the RM code a bit, but I'm still not sure what you're
> referring to.

Look at the RM system test. Specifically the last few lines of
SequenceTest.setupGreeter().

> I'm fine with limiting automatic launching to be per-Client.

Great, as that's my main issue, i.e. to avoid a proliferation of
automatically launched decoupled endpoints.

> And just to be incredibly clear, I'm specifically trying to get away
> from the xml approach and make complete control of the endpoints
> easy when using the Client API.

> > > I think you're oversimplifying the issue. Either you're trying
> > > to:
> > > a) count the number of clients which are actively invoking.
> > > In which case when I stop sending and then resume a minute later
> > > you're going to have to start it all over again, which wouldn't
> > > make a lot of sense. Especially in a single thread scenario, as
> > > it would be starting & stopping a server during each invocation.
> > >
> > > b) Implement a timeout mechanism. In which case it would work
> > > equally well when setting a reply-to EPR on a Client.
> > >
> > > c) Implement a refcount & timeout mechanism. i.e. if I call close
> > > on my conduit, but another client still has the same decoupled
> > > server/port open, we would obviously want it to stay up. Which
> > > once again works equally well when setting a reply-to EPR on the
> > > client.
> > >
> > > d) Trying to call Conduit.close() when the client is being GC'd.
> > > But there is no sure way to do this in Java. Even if you could,
> > > this would once again work equally well when setting an EPR on
> > > the Client.
> >
> > What's there is in no way an attempt to be timeout-based or reliant
> > on GC.
> >
> > Instead the idea in the original Celtix code was to use a reference
> > counting scheme for the decoupled response endpoint, and to allow
> > this to be shared across client transport instances. This was
> > simply not ported over properly to CXF.
> >
> > The original scheme worked as the HTTPClientTransport was created
> > once per binding instance, had well-defined shutdown semantics, and
> > reused if possible a pre-existing listener for the decoupled
> > endpoint, even if this was created from another
> > HTTPClientTransport. This reuse was easy to do as
> > HTTPClientTransport registered the Jetty handler directly, instead
> > of going thru' the DestinationFactory, and thus could easily check
> > if a pre-existing handler was already registered.
>
> I don't see how this gets around the issues I mentioned in (a).
> It sounds like the decoupled destination would stick around until
> you shut down the HTTPClientTransport. And there is no way to
> automagically shut down the client transport really.

But you're proposing an explicit Client.close() API to handle this,
no?

> > > This brings up an interesting point: Currently I can only
> > > associate a decoupled destination with a client's conduit AFAIK.
> > > But this makes absolutely no sense to me - there are many
> > > decoupled destinations that could be associated with a client.
> > > For instance it might have a different acksTo than ReplyTo. Or I
> > > might have a different FaultTo.
> >
> > I don't think you're correct here. If I go and explicitly set the
> > replyTo to a Destination that I've created (via a
> > DestinationFactory) then this will be used for the <wsa:ReplyTo> in
> > the outgoing message, as opposed to the back-channel destination
> > overwriting the explicit setting.
> >
> > Similarly the acksTo could be set to any Destination, but RM just
> > happens to be implemented to use the back-channel destination for
> > convenience. By convenience, I mean it avoids the RM layer having
> > to set up a separate in-interceptor-chain to handle incoming
> > out-of-band messages.
> >
> > The per-Conduit restriction only applies to *automatically
> > launched* decoupled response endpoints. The application can go nuts
> > explicitly creating response endpoints all over town if it wants
> > ...
>
> First, I was talking about from a configuration point of view.
>
> Second, doesn't this kind of defeat the point of having the
> decoupled destination in the conduit?

Nope, I don't think it defeats the point. The point being that the
lifecycle of any automatically launched decoupled endpoint is the
*responsibility of the CXF runtime*, whereas the lifecycle of any
Destinations explicitly launched by the application is of course the
*responsibility of the application itself*.
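To make the Celtix-era reference counting concrete, here's a rough
pure-Java sketch of the kind of scheme I mean (class and method names
are made up for illustration, this is not actual Celtix or CXF code):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical registry mapping decoupled endpoint URI -> refcount,
// so multiple client transport instances can share one listener.
class DecoupledEndpointRegistry {
    private final Map<String, Integer> refCounts = new HashMap<>();

    // Returns true if a listener for this URI already existed and is
    // being reused, false if the caller should launch a new one.
    synchronized boolean acquire(String uri) {
        boolean preExisting = refCounts.containsKey(uri);
        refCounts.merge(uri, 1, Integer::sum);
        return preExisting;
    }

    // Returns true only when the last reference is dropped, i.e. the
    // caller should now actually stop the listener.
    synchronized boolean release(String uri) {
        Integer count = refCounts.get(uri);
        if (count == null) {
            return false; // never acquired, nothing to stop
        }
        if (count == 1) {
            refCounts.remove(uri);
            return true; // last referent gone: safe to shut down
        }
        refCounts.put(uri, count - 1);
        return false; // still shared by another transport instance
    }
}
```

The point is that closing one conduit only decrements the count; the
listener stays up until the last sharer releases it, which is exactly
the (c) behaviour without any timeout guesswork.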
If we limit the cardinality of the automatically launched decoupled
endpoint to one-per-Conduit (equivalently, one-per-Client), then we
have a well-defined point at which it makes sense to close the
endpoint (i.e. when the Conduit is closed, as a side-effect of your
proposed new Client.close() API).

If we do not limit the cardinality of the automatically launched
decoupled endpoints, then we'd have to either let these accumulated
endpoints remain active until the Client is close()d or the
application exit()s, or guess when it would make sense to shut down a
seemingly inactive decoupled endpoint.

But this guesswork is problematic, as the decoupled endpoint could
have been specified as the acksTo for some RM sequences. It would be
invalid, for example, to take the approach ... hey, there's no
outstanding MEPs for which this endpoint was specified as the replyTo,
so let's just shut it down. Obviously that would pull the rug out from
under RM, which may receive any number of incoming out-of-band
messages on that endpoint until the sequence is terminated, and AFAIK
by default we allow the sequence to proceed indefinitely rather than
actively terminating it and starting up a new one every N messages or
whatever.

On the other hand, if the application wants to make many invocations
on a single Client, each with a different replyTo, then it's welcome
to set up the relevant Destinations itself and then explicitly call
shutdown() when it's done with each. The app knowing the appropriate
point for the shutdown to occur is the crucial point.

/Eoghan
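P.S. The one-per-Conduit lifecycle I'm arguing for boils down to
something like the following (hypothetical names, not the real CXF
Conduit API):

```java
// Hypothetical sketch of limiting the automatically launched decoupled
// endpoint to one per Conduit, with its shutdown tied to close().
class ConduitSketch {
    private String decoupledEndpoint; // at most one per Conduit

    // Lazily launch the single decoupled response endpoint on demand;
    // repeated requests reuse it rather than proliferating endpoints.
    String getDecoupledEndpoint() {
        if (decoupledEndpoint == null) {
            // stands in for actually starting a listener
            decoupledEndpoint = "http://localhost:9999/decoupled";
        }
        return decoupledEndpoint;
    }

    boolean hasActiveDecoupledEndpoint() {
        return decoupledEndpoint != null;
    }

    // The well-defined shutdown point: closing the Conduit (e.g. as a
    // side-effect of the proposed Client.close()) tears the endpoint
    // down, with no guessing about when it has become inactive.
    void close() {
        decoupledEndpoint = null;
    }
}
```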
