>> 1) FORWARD REQUEST FROM WEB-SERVER TO SERVLET ENGINE
>> 2) WAIT FOR RESPONSE
>> 3) GET RESPONSE AND FORWARD TO WEB-SERVER.
>
>Well, I see it a bit differently :-)
>
>1. Apache sends a message to Tomcat with the original request
>( or part of it - for example it can send only some headers that are
>commonly used ), then starts waiting.
>
>2. Tomcat receives the message and starts processing. When it needs
>something from Apache ( like sendHead, get info, auth, or admin
>commands ), it sends a message, then starts waiting.
>
>3. That goes on, with a message acting as a token. At any time one side
>is listening, and the other ( whoever received the last message ) has
>control.
>
>In a way it's like a single "virtual" thread of execution, with the
>Apache thread and the Tomcat thread passing control via messages.

>
>Now, I know this sounds complicated - but it's a good solution with
>very little overhead. This is not a general RPC protocol, but something
>specialized for Tomcat, and it works very well with single-threaded
>processes.

The only reservation is about speed: won't all these write() and read()
calls make the web server too slow? But that's interesting and will need
testing under real conditions; AJP14 could be used to test these
experimental schemes.
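
To make the "message as token" idea concrete, here is a very rough
sketch of the Tomcat-side loop. The message types, framing and names
below are invented just for illustration - this is not the real
AJP13/AJP14 wire format:

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.Socket;

// Rough sketch of the "message as token" exchange, seen from the Tomcat side.
// Message types and framing are invented for illustration; this is not the
// real AJP13/AJP14 wire format.
public class TokenLoop {

    static final int FORWARD_REQUEST = 1; // Apache -> Tomcat: the original request
    static final int GET_HEADER      = 2; // Tomcat -> Apache: ask for data not sent initially
    static final int SEND_BODY       = 3; // Tomcat -> Apache: a chunk of the response
    static final int END_RESPONSE    = 4; // Tomcat -> Apache: done, Apache keeps the token

    public static void serve(Socket socket) throws IOException {
        DataInputStream  in  = new DataInputStream(socket.getInputStream());
        DataOutputStream out = new DataOutputStream(socket.getOutputStream());

        while (true) {
            // Tomcat is listening: Apache holds the token until it sends a message.
            int type = readType(in);
            byte[] request = readPayload(in);
            if (type != FORWARD_REQUEST) {
                continue;
            }

            // Tomcat now holds the token. If it needs something from Apache,
            // it sends a message and waits; Apache answers and the token comes back.
            writeMessage(out, GET_HEADER, "Authorization".getBytes());
            readType(in);
            byte[] header = readPayload(in); // used while building the response

            // Send the response, then hand the token back for good with END_RESPONSE.
            writeMessage(out, SEND_BODY, "Hello from Tomcat".getBytes());
            writeMessage(out, END_RESPONSE, new byte[0]);
        }
    }

    static int readType(DataInputStream in) throws IOException {
        return in.readInt();
    }

    static byte[] readPayload(DataInputStream in) throws IOException {
        byte[] payload = new byte[in.readInt()];
        in.readFully(payload);
        return payload;
    }

    static void writeMessage(DataOutputStream out, int type, byte[] payload) throws IOException {
        out.writeInt(type);
        out.writeInt(payload.length);
        out.write(payload);
        out.flush();
    }
}

The point is that there is never more than one message in flight:
whoever just read a message holds the token, and the other side is
blocked in read().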

>Even in Apache 2.0, creating threads for each Tomcat callback and using
>additional sockets is significant overhead given the time constraints.
>
>The problem is that the "admin" commands can be passed only at certain
>moments:
>- when Apache connects for the first time ( the socket will be
>kept open )
>- when Apache sends a request
>
>I think that should be enough for what we need - if we make it more
>complex we may add too much overhead ( and we may not be able to have
>good support for Apache 1.3 ).

More overhead would make AJP14 slower than AJP13, or even AJP12, and
that's not the goal, since admin messages are only 1% of the real
traffic...
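
And to make the two "moments" above concrete, here is the kind of thing
that could sit on the web-server side: admin commands wait in a queue
and are only flushed when we talk to Tomcat anyway ( first connect, or
together with a forwarded request ), so they cost nothing on the normal
path. Class and method names are made up - just a sketch:

import java.util.ArrayDeque;
import java.util.Queue;

// Sketch of the constraint discussed above: admin commands are never sent
// on their own; they are queued and only flushed at the two moments where
// the web server talks to Tomcat anyway. All names here are invented.
public class AdminPiggyback {

    private final Queue<byte[]> pendingAdmin = new ArrayDeque<>();

    // Called by the web-server side whenever an admin command is issued.
    public void queueAdmin(byte[] command) {
        pendingAdmin.add(command);
    }

    // Moment 1: the connection is opened for the first time.
    public void onConnect(Channel channel) {
        flushAdmin(channel);
        // ... then the normal login/negotiation would follow
    }

    // Moment 2: a request is forwarded to Tomcat.
    public void onForwardRequest(Channel channel, byte[] request) {
        flushAdmin(channel);   // admin commands ride along with the request
        channel.send(request);
    }

    private void flushAdmin(Channel channel) {
        while (!pendingAdmin.isEmpty()) {
            channel.send(pendingAdmin.poll());
        }
    }

    // Minimal stand-in for whatever actually writes messages on the socket.
    public interface Channel {
        void send(byte[] message);
    }
}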
