Frederick,
  It seems that it is not a matter of whether it can be done, but whether it should be done. As far as I can tell from what you have submitted, the intermediaries must be able to a) process the headers at all, i.e. we must have some paradigm in place where the headers are recognizable, and b) contain the logic necessary to process multiple headers, in varying order, according to a changing set of criteria. This we have established. My problem is that you are saying the criteria are established by a client application that details to the service provider how a message is to be processed. This should not happen, for several reasons. 1) It is a major security exposure, because you expose the entirety of your intermediary services to the outside world, offering a buffet of information to potential intruders. 2) The client application thickens considerably, because dynamic changes over time must now be accounted for at this remote layer. 3) If it comes to pass that we must implement a custom processing scheme at the intermediary layer, it is best to isolate that programming and make it applicable to other clients, many of which may not yet exist. When you couple this client and this intermediary, you essentially bar expansion on both sides of the river.
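To make that concrete, here is a rough sketch of the anti-pattern as I read it; the class and step names are mine, purely for illustration, not anything taken from your submission:

// Hypothetical anti-pattern: the client enumerates the intermediary
// processing steps and ships them along with every request.
import java.util.List;

public class ClientDictatedRouting {

    // The client has to know the intermediary topology to build this list,
    // which is exactly the exposure and coupling problem described above.
    static List<String> buildClientInstructions() {
        return List.of("verifySignature", "decrypt", "auditLog", "routeToBilling");
    }

    public static void main(String[] args) {
        // Any change to the intermediary layer now forces a client change,
        // and the step names advertise our internals to the outside world.
        System.out.println("client-dictated steps: " + buildClientInstructions());
    }
}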
 It comes down to who commands the processing sequence.
 Service providers and supporting intermediaries dictate the sequence in which processing should occur, and the current aim is to move away from any dependence on the client in this decision-making process, because it defeats the concept of the Enterprise Service Bus altogether to house this understanding outside the service provider layer and its immediate supporting applications. This is actually a throwback to the whole interface concept in C++, CORBA, and later Java: I want to expose an interface, but telling the client the details of my application causes problems. If I begin telling clients how to route messages, where does it end? How do I add useful capabilities to my service provider layer without breaking the coupling between client and server?

 In my humble opinion, what you should be focusing on here is declarative coupling between service providers and dynamic routing of messages once they enter some type of intermediary gateway that is charged with routing messages according to a policy. So we A) isolate the clients according to whatever WSDL has been published, with the client blind to everything but the essential requirements; B) implement a gateway intermediary that accepts the client invocation and applies a policy, even if this triggers a header rewrite, which is fine as long as the header is stripped before the message leaves the secure paradigm; and C) implement supporting intermediaries that route according to the policy established.
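Something like the sketch below is what I have in mind for B) and C); the policy table keyed by the published operation is my own illustration, not any particular product's API:

// Sketch: the gateway selects the processing policy from the invocation
// itself (the published operation), never from client instructions.
import java.util.List;
import java.util.Map;

public class PolicyGateway {

    // Policy is established and owned entirely on the provider side.
    static final Map<String, List<String>> POLICY_BY_OPERATION = Map.of(
            "submitOrder", List.of("verify", "decrypt", "log", "routeToBilling"),
            "queryStatus", List.of("log", "routeToStatusService"));

    // The gateway may attach an internal header carrying the policy, as long
    // as that header is stripped before the message leaves the secure boundary.
    static List<String> acceptInvocation(String operation) {
        List<String> policy = POLICY_BY_OPERATION.get(operation);
        if (policy == null) {
            throw new IllegalArgumentException("no policy published for " + operation);
        }
        return policy; // downstream intermediaries route on this, not on client input
    }

    public static void main(String[] args) {
        System.out.println(acceptInvocation("submitOrder"));
    }
}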
Thus, the intermediaries route based on the invocation, and the policy applies to the invocation alone. Consider: you said you may want to log, verify, and decrypt, or perform some other useful action. Do we do this on the client, or on the invocation itself? If a client says "do this," how do I enforce the policy and stay secure? And if I implement the policy to ensure verification or decryption, for example, then I don't need the client to specify anything for me. All in all, the same steps must be applied in either case, so eliminating the client dependency is the correct move in terms of design.
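In other words, the steps hang off the invocation, roughly like this; the step names are placeholders and the message is just a string here:

// Sketch: the same steps are applied to the invocation no matter which
// client sent it, so the client has nothing left to specify.
import java.util.List;
import java.util.function.UnaryOperator;

public class InvocationPipeline {

    static UnaryOperator<String> step(String name) {
        return message -> {
            System.out.println("applying " + name); // e.g. log, verify, decrypt
            return message;                         // placeholder transformation
        };
    }

    public static void main(String[] args) {
        // Order comes from the provider-owned policy, not from the caller.
        List<UnaryOperator<String>> policy =
                List.of(step("log"), step("verify"), step("decrypt"));

        String message = "<encrypted-order/>";
        for (UnaryOperator<String> s : policy) {
            message = s.apply(message);
        }
        System.out.println("forwarding " + message);
    }
}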
The usual response here is: what about differentiation of clients, i.e. one client requires the added security while another does not? In that case it makes no sense to deal with the header issue either, because the coming standards for WS-I interoperability and highly detailed WSDLs will eventually mandate separate operation signatures and port URLs, which can then use URL rewriting to realign on the back end when invoking the same service (a rough sketch follows below). So we seem to agree that you have to develop custom intermediaries; however, in this revised case you move the responsibility from the client to the first intermediary and end up with a much more scalable application. I think if you look at some of the more mainstream ESB-based products you will see these multi-step protocols trying to establish this paradigm; particular examples are SOA appliances such as DataPower and Reactivity, and the more scalable ESB products such as BEA and WebSphere. If I am completely wrong here, please forgive me, everyone misfires. I just spent some time trying to get a grip on the client-layer design issues and the scalability problems in your description, and have to wonder if perhaps there is a more elegant way to solve this very important issue.
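Here is the sketch I promised for the differentiation case; the URLs and step names are invented solely to show the shape of the idea:

// Sketch: two published port URLs, each carrying its own policy; the first
// intermediary rewrites both to the same back-end endpoint after the policy runs.
import java.util.List;
import java.util.Map;

public class PortUrlRewriter {

    static final String BACKEND = "http://internal.example/orders/OrderService";

    // Which steps apply is determined by which published URL was invoked.
    static final Map<String, List<String>> POLICY_BY_PORT = Map.of(
            "https://gw.example.com/orders/secure", List.of("verify", "decrypt", "log"),
            "https://gw.example.com/orders/plain",  List.of("log"));

    static String rewrite(String invokedUrl) {
        List<String> policy = POLICY_BY_PORT.get(invokedUrl);
        if (policy == null) {
            throw new IllegalArgumentException("unknown port " + invokedUrl);
        }
        policy.forEach(stepName -> System.out.println("applying " + stepName));
        return BACKEND; // realigned on the back end, same service either way
    }

    public static void main(String[] args) {
        System.out.println("forwarding to " + rewrite("https://gw.example.com/orders/secure"));
    }
}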
-Thomas
