I have a similar requirement. What solution did you end up using for
communication between Synapse and a web service hosted in the same JVM?

Andreas Veithen wrote:
> 
> 
> On 05 Jan 2008, at 03:55, Asankha C. Perera wrote:
> 
>> Andreas
>>> * By acquiring a RequestDispatcher for a servlet from the foreign
>>> context and invoking forward or include on this object.
>>>
>>> The first approach can be discarded, again because of class loader
>>> issues and the fact that it would require changes to the second
>>> application. The second approach is more promising because it allows
>>> one to invoke the servlet that implements the target service. Note
>>> however
>>> that there is an important restriction imposed by the servlet
>>> specification on the use of RequestDispatcher#include/forward: "The
>>> request and response parameters must be either the same objects as
>>> were passed to the calling servlet’s service method or be subclasses
>>> of the ServletRequestWrapper or ServletResponseWrapper classes that
>>> wrap them." This means that these methods can't be called with
>>> arbitrary HttpServletRequest/Response implementations. Obviously this
>>> requirement is there to guarantee that the server gets access to the
>>> original request/response objects. It could be that some servers do
>>> not enforce this requirement, but for those that do, the practical
>>> consequence is that the RequestDispatcher approach would only work if
>>> the original request is received by SynapseAxisServlet, which is
>>> perfectly compatible with the use case described here.
>>>
>>> My question is whether anybody on this list is aware of an existing
>>> Axis2 transport implementation that uses this approach or if anybody
>>> here sees any obstacle that would make it impossible to implement  
>>> this
>>> kind of transport.
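
Just to make the idea concrete, here is a minimal sketch of such a
cross-context dispatch. This is not an existing Axis2 transport; the
context path "/targetapp", the dispatch path and the header override are
purely illustrative, and the container must allow cross-context access
(e.g. crossContext="true" on Tomcat):

import java.io.IOException;
import javax.servlet.RequestDispatcher;
import javax.servlet.ServletContext;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletRequestWrapper;
import javax.servlet.http.HttpServletResponse;

public class InVmForwardingServlet extends HttpServlet {

    protected void service(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {

        // ServletContext of the foreign web app; null if the container
        // does not permit cross-context access.
        ServletContext target = getServletContext().getContext("/targetapp");
        if (target == null) {
            resp.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR,
                    "Cross-context access to /targetapp is not enabled");
            return;
        }

        // Per the servlet spec, the request passed to forward/include must
        // be the original request or a ServletRequestWrapper around it; the
        // wrapper is where a transformed message could be exposed.
        HttpServletRequest wrapped = new HttpServletRequestWrapper(req) {
            public String getHeader(String name) {
                // Illustrative modification only: override a single header.
                if ("SOAPAction".equalsIgnoreCase(name)) {
                    return "urn:mediatedOperation";
                }
                return super.getHeader(name);
            }
        };

        // Dispatch to the target servlet within the same JVM; the whole
        // exchange stays on the current worker thread.
        RequestDispatcher rd = target.getRequestDispatcher("/services/TargetService");
        rd.forward(wrapped, resp);
    }
}
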
>> Since Synapse would most often modify the original request and forward
>> a new one (e.g. a transformed version of the original) to the back-end
>> service, I doubt this approach would be feasible. Although you could
>> theoretically use Synapse with a servlet transport, we built the
>> non-blocking HTTP/S transports for Synapse because, acting as an ESB,
>> it needs to handle many connections concurrently, passing them between
>> the requester and the actual service. With a servlet transport, the
>> thread invoking Synapse would block until the backend service replies,
>> which could quickly exhaust the thread pool. The objective of the
>> non-blocking HTTP/S transport is to prevent this, and it works very
>> well under load testing.
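
For context, here is a minimal sketch of the blocking pattern being
described: a mediation servlet that calls a remote backend with
HttpURLConnection ties up its container worker thread until the backend
responds. The backend URL is illustrative and error responses are not
handled:

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class BlockingProxyServlet extends HttpServlet {

    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {

        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://backend.example.com/service").openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("POST");
        if (req.getContentType() != null) {
            conn.setRequestProperty("Content-Type", req.getContentType());
        }

        // Copy the incoming request body to the backend.
        copy(req.getInputStream(), conn.getOutputStream());

        // The worker thread blocks here until the backend replies.
        resp.setStatus(conn.getResponseCode());
        copy(conn.getInputStream(), resp.getOutputStream());
    }

    private static void copy(InputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[4096];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
        out.flush();
    }
}
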
> 
> I understand that in a full-blown ESB use case where you have many
> clients connecting to Synapse and where Synapse dispatches those
> requests to many different backend services, the servlet transport is
> not a good solution, because you will see many worker threads sleeping
> while waiting for responses from remote services, quickly exhausting
> the thread pool of the servlet container. However, the use case
> considered here is different: all incoming requests are transformed and
> dispatched to a target service deployed in the same container. In this
> scenario thread pool exhaustion is not a problem, because each sleeping
> worker thread used by Synapse will be waiting for one and only one
> worker thread used by the target service. Therefore there will never be
> more than 50% of the container's worker threads used by Synapse and
> waiting. This implies that roughly twice as many worker threads are
> needed to get the same level of concurrency, but this is not a problem
> because the first limit that will be reached is the number of
> concurrent requests the target service is able to handle.
> 
> Note that if a RequestDispatcher could be used to forward the  
> transformed request to the target service, thread allocation would be  
> optimal, because the whole end-to-end processing (Synapse -> target  
> service -> Synapse) would be done by a single worker thread.
> 
>> Anyway, when two apps communicate on the same host, the TCP overheads
>> are reduced AFAIK by the OS, and the calls are passed through locally.
>>
> 
> I don't know how Windows handles this, but on Unix systems local
> traffic certainly goes through the whole TCP/IP stack down to the IP
> level. The only optimization is that the MTU of the loopback interface
> is higher than that of a normal network interface, meaning that the TCP
> stream is broken up into larger segments.
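
If anyone wants to check this on their own machine, here is a quick
(Java 6+) way to compare the loopback and regular interface MTUs;
interface names and values depend on the OS:

import java.net.NetworkInterface;
import java.net.SocketException;
import java.util.Enumeration;

public class MtuCheck {
    public static void main(String[] args) throws SocketException {
        // List every network interface with its loopback flag and MTU.
        Enumeration<NetworkInterface> ifaces = NetworkInterface.getNetworkInterfaces();
        while (ifaces.hasMoreElements()) {
            NetworkInterface nif = ifaces.nextElement();
            System.out.println(nif.getName()
                    + " loopback=" + nif.isLoopback()
                    + " mtu=" + nif.getMTU());
        }
    }
}
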
> 
> Regards,
> 
> Andreas