Hi Bob,

I would like to comment on the following line.

CLIENT ---> INTERMEDIARY ---> ENDPOINT

Are you talking about the "INTERMEDIARY" mentioned in the SOAP specification?

My team mates and I had a long discussion (and some arguments) on this
matter a few months back. The question was:

What is this "intermediary"? Can we consider a handler to be an
intermediary, or should the term be reserved for a full SOAP node, such as
a SOAP engine?

Our final conclusion was that an intermediary should be a SOAP engine, not
a SOAP handler. Of course we should not argue over mere words, but over
the meaning of the words.


>> Many clients will connect with different "payload weight", which
>> roughly determines the computational time for each request. The

I am not an ANN expert, but this sounds like a neural network :-). If you
want a network (something like a neural network), are you planning to
treat a handler as a node?

Reading through your mail, I think you might have to use a handler as an
intermediary to provide the required functionality. The problem is that
the Axis engine controls the handler chains (see the architecture guide),
so a computation done in a specific handler cannot control the flow of the
message. You will have a hard time configuring a handler as a node in the
network, especially if it is going to be more than two layers deep.
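To make that concrete, here is a rough sketch, in plain Java with no Axis
dependency, of the kind of routing decision an intermediary service would
make after reading a custom SOAP header. All the names here (the
`payloadWeight` header, its `urn:example` namespace, the endpoint URLs)
are invented purely for illustration:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;
import java.io.ByteArrayInputStream;

// Hypothetical intermediary logic: read a made-up "payloadWeight" SOAP
// header and choose a backend endpoint accordingly.
public class IntermediaryRouter {

    // Parse the request envelope and pull out our custom header value.
    static int readPayloadWeight(String soapXml) throws Exception {
        DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
        f.setNamespaceAware(true);  // needed for getElementsByTagNameNS
        Document doc = f.newDocumentBuilder()
                .parse(new ByteArrayInputStream(soapXml.getBytes("UTF-8")));
        NodeList nodes = doc.getElementsByTagNameNS("urn:example", "payloadWeight");
        return nodes.getLength() > 0
                ? Integer.parseInt(nodes.item(0).getTextContent().trim())
                : 0;
    }

    // Decide which "real" endpoint should do the work.
    static String pickEndpoint(int weight) {
        return weight > 100
                ? "http://heavy.example.org/services/Endpoint"
                : "http://light.example.org/services/Endpoint";
    }

    public static void main(String[] args) throws Exception {
        String envelope =
            "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">"
          + "<soap:Header><w:payloadWeight xmlns:w=\"urn:example\">250</w:payloadWeight></soap:Header>"
          + "<soap:Body/></soap:Envelope>";
        int weight = readPayloadWeight(envelope);
        System.out.println(pickEndpoint(weight));  // the "heavy" endpoint here
    }
}
```

The point is that this decision lives in a service of its own, not inside
an Axis handler chain, so nothing stops it from redirecting the message.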

If you are really keen on this, one option is to write your own engine!
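For what it is worth, the queueing / load-balancing part of the scenario
does not need anything Axis-specific either. A minimal sketch of the idea
in plain Java (all class and endpoint names invented for illustration) is
a bounded queue feeding one worker per backend endpoint, so a busy
endpoint simply stops pulling work:

```java
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;

// Illustrative only: requests wait in a bounded queue, and one worker
// thread per backend endpoint takes them as that endpoint frees up.
public class EndpointFarm {
    private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(100);

    void submit(String request) throws InterruptedException {
        queue.put(request);  // blocks the caller when the farm is saturated
    }

    void start(List<String> endpoints, CountDownLatch done) {
        for (String endpoint : endpoints) {
            Thread worker = new Thread(() -> {
                try {
                    while (true) {
                        String request = queue.take();
                        // real code would forward the SOAP message to
                        // 'endpoint' here and relay the response back
                        System.out.println(endpoint + " handled " + request);
                        done.countDown();
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            worker.setDaemon(true);
            worker.start();
        }
    }

    public static void main(String[] args) throws Exception {
        EndpointFarm farm = new EndpointFarm();
        CountDownLatch done = new CountDownLatch(4);
        farm.start(List.of("endpointA", "endpointB"), done);
        for (int i = 1; i <= 4; i++) farm.submit("request-" + i);
        done.await();  // wait until all four requests are processed
    }
}
```

This is exactly the kind of flow control a handler inside the Axis chain
cannot give you, but a stand-alone intermediary (or your own engine) can.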

Regards,
Dimuthu.


>> My first goal would be just simply the intermediary to receive the
>> request, read the headers, make some database operations, make some
>> choices and if everything is ok, forward the request with new headers
>> to "real" web service (ENDPOINT) to do the job.
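That quoted flow (read the headers, make some choices, forward the request
with new headers) can also be sketched independently of Axis. Here is a
small example of stamping a new header onto the envelope before forwarding
it; the `routedBy` header and its `urn:example` namespace are made up for
illustration:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import java.io.ByteArrayInputStream;
import java.io.StringWriter;

// Illustrative sketch: add a made-up "routedBy" header to the envelope
// before forwarding it to the real endpoint.
public class HeaderRewriter {
    static final String SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/";

    static String addRoutingHeader(String soapXml, String intermediaryId)
            throws Exception {
        DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
        f.setNamespaceAware(true);
        Document doc = f.newDocumentBuilder()
                .parse(new ByteArrayInputStream(soapXml.getBytes("UTF-8")));
        Element header = (Element) doc
                .getElementsByTagNameNS(SOAP_NS, "Header").item(0);
        Element stamp = doc.createElementNS("urn:example", "w:routedBy");
        stamp.setTextContent(intermediaryId);
        header.appendChild(stamp);
        // Serialize the modified envelope back to a string for forwarding.
        StringWriter out = new StringWriter();
        TransformerFactory.newInstance().newTransformer()
                .transform(new DOMSource(doc), new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        String envelope = "<soap:Envelope xmlns:soap=\"" + SOAP_NS + "\">"
                + "<soap:Header/><soap:Body/></soap:Envelope>";
        System.out.println(addRoutingHeader(envelope, "intermediary-1"));
    }
}
```

The database lookups and the actual HTTP forwarding would slot in around
this, but the header manipulation itself is just ordinary XML work.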




-- 
Lanka Software Foundation  http://www.opensource.lk

> Hi,
> I am a bit confused by some of your statements. Perhaps you can help me
> with that...
> Wolfgang
>
>
> Borut Bolcina [mailto:[EMAIL PROTECTED]] wrote:
>>
>> Hi, again
>>
>> since I had no luck (nobody answered my question posted a
>> month ago, subject: WebServices chaining), I'll try again.
>>
>>
>>
>> Can you please help me understand just how can one implement this
>> 'simple' scenario.
>>
>> CLIENT ---> INTERMEDIARY ---> ENDPOINT
>>
>> Many clients will connect with different "payload weight", which
>> roughly determines the computational time for each request. The
>> ultimate goal would be that intermediary node acts as a queue and a
>> load balancer for a number of endpoints.
>
> so you want to implement something like a communications bus, right?
> load balancer ok, but why should it act as a queue?
>
>>
>> My first goal would be just simply the intermediary to receive the
>> request, read the headers, make some database operations, make some
>> choices and if everything is ok, forward the request with new headers
>> to "real" web service (ENDPOINT) to do the job. If successful, the
>> intermediary again does some processing and returns the
>> result back to
>> the originator of the request - the CLIENT.
>>
>> Now, what happens if in the middle of this process another request
>> comes in? The intermediary should be clever enough to know that the
>> ENDPOINT is busy and act accordingly.
>
> what do you mean with "busy"? why should the endpoint only receive one
> request at a time?
>
>> I read about synchronous and
>> asynchronous requests and other theoretical stuff which I
>> could find on
>> the internet, but I am still confused. If asynchronous services would
>> be a way to go, then I guess the clients will have to be smarter and
>> the choice of technology narrower. If a request-response mechanism
>> could do the job, it would be easier, wouldn't it?
>
> yes...
>
>>
>> I am aware (it is even a demand) that I will have to
>> implement several
>> ENDPOINTs which would work as a farm of the same services to increase
>> the efficiency and the reliability of service will be one of
>> the major
>> questions, so I don't want to start implementing a bad architecture
>> which will limit my options later on.
>
> Architectures using a web server/servlet engine or an application server
> can process more than one request at a time. Clustering is only necessary
> for failover/load-balancing mechanisms.
>
>>
>> I have written a client which I installed on several machines
>> to stress
>> test the ENDPOINT web service. I bombarded the poor workhorse from
>> those machines with requests coming apart just under a second from
>> each. Each response lasted roughly from 15 to 45 seconds, but
>> they all
>> got processed successfully. Now, I would be happy if I could
>> squeeze this intermediary in between to do some logging. I guess
>> the HTTP nature of the request-response loop handled the queue for me.
>> How do I do this with intermediary node?
>
> No queue but several threads to process the requests. Perhaps you use a
> resource in your webservice that is only available once, so that all
> threads have to wait for it?
>
>>
>> Does my question narrow down to handling sessions? If a client sends a
>> request which takes a long time to process, how to handle another
>> client's request which came seconds after the first one?
>
> Sessions/Threading does this for you
>
>>
>> Am I missing something very crucial here?
>>
>> I am developing with Java and WebObjects as the application server,
>> just to note, but not to disturb you. WO uses Axis engine.
>>
>> Regards,
>> bob


