Hello Wolfgang,


On Thursday, Nov 13, 2003, at 13:29 Europe/Ljubljana, Wolfgang Vullhorst wrote:


Hi,
I am a bit confused by some of your statements. Perhaps you can help me with that...
Of course, I will try to.

Wolfgang


Borut Bolcina [mailto:[EMAIL PROTECTED]] wrote:

Hi, again


Since I had no luck and nobody answered my question posted a
month ago (subject: WebServices chaining), I'll try again.



Can you please help me understand just how one can implement this
'simple' scenario?

CLIENT ---> INTERMEDIARY ---> ENDPOINT

Many clients will connect with different "payload weight", which
roughly determines the computational time for each request. The
ultimate goal would be for the intermediary node to act as a queue and a
load balancer for a number of endpoints.

So you want to implement something like a communications bus, right? Load balancer, OK, but why should it act as a queue?

I think communications bus is not the right term. As you mention below, I didn't realize that the intermediary node can pump requests to a selected endpoint without first waiting for the response to come back. That is why I thought a queue would be needed. But there is one scenario where a queue would be in place: if priority requests ever become an implementation demand, this queue would be a necessity. Imagine several hundred low-priority requests coming in and then just one high-priority request shows up. It should be served as fast as possible. Of course, it is much easier (read: cheaper) to buy another machine for high-priority customers! But...
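If it ever comes to that, I imagine the intermediary keeping something like a priority queue in front of the endpoints. Just a minimal Java sketch of what I mean (the Job class and the priority values are made up, nothing Axis-specific):

import java.util.concurrent.PriorityBlockingQueue;

// Hypothetical request wrapper: lower number = higher priority, served first.
class Job implements Comparable<Job> {
    final int priority;
    final String payload;

    Job(int priority, String payload) {
        this.priority = priority;
        this.payload = payload;
    }

    public int compareTo(Job other) {
        if (this.priority != other.priority) {
            return this.priority < other.priority ? -1 : 1;
        }
        return 0;
    }
}

class PriorityDispatcher {
    // Thread-safe: request threads put(), a few worker threads take()
    // and forward each job to one of the endpoints.
    private final PriorityBlockingQueue<Job> queue = new PriorityBlockingQueue<Job>();

    void submit(Job job) {
        queue.put(job);
    }

    Job next() throws InterruptedException {
        // The single high-priority job jumps ahead of the hundreds of low-priority ones.
        return queue.take();
    }
}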


My first goal would be simply for the intermediary to receive the request, read the headers, do some database operations, make some choices and, if everything is OK, forward the request with new headers to the "real" web service (ENDPOINT) to do the job. If successful, the intermediary again does some processing and returns the result back to the originator of the request, the CLIENT.
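Just so it is concrete, this is roughly what I have in mind for the intermediary. It is only a sketch (Axis client API; the endpoint URL, operation name and header check are invented):

import javax.xml.namespace.QName;
import org.apache.axis.client.Call;
import org.apache.axis.client.Service;

public class IntermediaryService {

    // Exposed to the CLIENT as a web service operation; forwards to the ENDPOINT.
    public String process(String customerId, String payload) throws Exception {
        // 1. Read/validate the incoming data, log it, do the database work (omitted).
        if (customerId == null || customerId.length() == 0) {
            throw new Exception("missing customer id");   // Axis turns this into a SOAP fault
        }

        // 2. Forward the (possibly rewritten) request to the real ENDPOINT.
        Call call = (Call) new Service().createCall();
        call.setTargetEndpointAddress(new java.net.URL("http://endpoint.example.com/soap"));
        call.setOperationName(new QName("urn:Endpoint", "doTheJob"));
        String result = (String) call.invoke(new Object[] { payload });

        // 3. Post-process / log the result, then hand it back to the CLIENT.
        return result;
    }
}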

Now, what happens if in the middle of this process another request
comes in? The intermediary should be clever enough to know that the
ENDPOINT is busy and act accordingly.

What do you mean by "busy"? Why should the endpoint only receive one request at a time?

I was probably wrong in my concept. Do I really need to know if the endpoint is still processing the previous request before handing it a new one? That is not the case, is it?

I read about synchronous and asynchronous requests and other theoretical stuff which I could find on the internet, but I am still confused. If asynchronous services were the way to go, then I guess the clients would have to be smarter and the choice of technology narrower. If a request-response mechanism could do the job, it would be easier, wouldn't it?

yes...



I am aware (it is even a requirement) that I will have to implement several ENDPOINTs working as a farm of the same service to increase efficiency. The reliability of the service will be one of the major questions, so I don't want to start implementing a bad architecture which will limit my options later on.

Architectures using a webserver/servlet engine or an application server allow you to
process more than one request at a time. Clustering is only necessary for
failover/load balancing mechanisms.


So for a scenario without load balancing the architecture would be something like this:

client (written in whatever language) --> application server (WebObjects with Axis) acting as the intermediary webservice --> endpoint (Python-powered webservice)



I have written a client which I installed on several machines to stress test the ENDPOINT web service. I bombarded the poor workhorse with requests spaced just under a second apart from each machine. Each request took roughly 15 to 45 seconds to process, but they all got processed successfully. Now, I would be a happy person if I could squeeze this intermediary in between to do some logging. I guess the request-response nature of HTTP handled the queuing for me. How do I do this with an intermediary node?
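For what it is worth, the test client is not much more than this kind of loop (a sketch only; the endpoint URL and operation name are placeholders for the real ones):

import javax.xml.namespace.QName;
import org.apache.axis.client.Call;
import org.apache.axis.client.Service;

// Fires one request roughly every second; each call runs on its own thread
// because a single response can take 15-45 seconds to come back.
public class StressClient {
    public static void main(String[] args) throws Exception {
        for (int i = 0; i < 100; i++) {
            final int n = i;
            new Thread(new Runnable() {
                public void run() {
                    try {
                        Call call = (Call) new Service().createCall();
                        call.setTargetEndpointAddress(
                                new java.net.URL("http://endpoint.example.com/soap"));
                        call.setOperationName(new QName("urn:Endpoint", "doTheJob"));

                        long start = System.currentTimeMillis();
                        call.invoke(new Object[] { "payload-" + n });
                        System.out.println("request " + n + " took "
                                + (System.currentTimeMillis() - start) + " ms");
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            }).start();

            Thread.sleep(900);   // next request just under a second later
        }
    }
}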

No queue, but several threads to process the requests. Perhaps you use a
resource in your webservice that is available only to one request at a time, so that all threads
have to wait for it?


I understand that several threads can be spawned, one for each request. I misused the word queue.
There will be cases where the same resource will have to be fetched from a database.
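To avoid exactly the bottleneck you describe, I guess each request-handling thread should grab its own connection (or one from a pool) rather than all of them sharing a single Connection object. A rough JDBC sketch of what I mean (the database URL and query are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class CustomerLookup {

    // Called from whatever thread is handling the current request.
    // A new (or pooled) connection per call, so threads do not queue up
    // behind one shared Connection.
    public String lookup(String customerId) throws Exception {
        Connection con = DriverManager.getConnection(
                "jdbc:mysql://db.example.com/registry", "user", "secret"); // placeholder
        try {
            PreparedStatement ps = con.prepareStatement(
                    "SELECT status FROM customers WHERE id = ?");
            ps.setString(1, customerId);
            ResultSet rs = ps.executeQuery();
            return rs.next() ? rs.getString(1) : null;
        } finally {
            con.close();
        }
    }
}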



Does my question narrow down to handling sessions? If a client sends a request which takes a long time to process, how do I handle another client's request which comes in seconds after the first one?

Sessions/threading does this for you.


OK, would this be a viable algorithm:

1. The client generates a request.
2. The intermediary reads the headers of the request and determines whether it is a valid request based on some value in the header.
3. If valid, a session is created.
4. The intermediary does some processing (logs the request, makes some db operations).
5. The intermediary alters the headers and sends the request on to the endpoint (do I need to insert a session ID or something unique in the headers??).
6. The endpoint processes the request and returns the result or a fault (does the response have to have this session ID in the headers??).
7. The intermediary does some db operations, inserts new headers and makes (?) a response. (How do I know where to send the response??)
8. The client gets the response or a fault.


As you can see, I put double question marks in this algorithm. What do I need to know to remove them? :-)
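My current guess at the header part (the spots with the double question marks), again only a sketch: it uses Axis's SOAPHeaderElement, the namespace and element name are invented, and I am not even sure the ID is needed as long as everything stays synchronous request-response, since the answer comes back on the same HTTP connection anyway.

import javax.xml.namespace.QName;
import org.apache.axis.client.Call;
import org.apache.axis.client.Service;
import org.apache.axis.message.SOAPHeaderElement;

public class ForwardWithId {

    public Object forward(String payload) throws Exception {
        Call call = (Call) new Service().createCall();
        call.setTargetEndpointAddress(new java.net.URL("http://endpoint.example.com/soap"));
        call.setOperationName(new QName("urn:Endpoint", "doTheJob"));

        // Something unique the intermediary can log and later match against the reply.
        // With a plain synchronous request-response this might be for logging only.
        String requestId = java.util.UUID.randomUUID().toString();
        call.addHeader(new SOAPHeaderElement("urn:intermediary", "requestId", requestId));

        return call.invoke(new Object[] { payload });
    }
}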

Thank you very much for your time!

Best,
bob

Am I missing something very crucial here?


I am developing with Java and WebObjects as the application server (just to note, not to disturb you). WO uses the Axis engine.

Regards,
bob




