Hi,
I am a bit confused by some of your statements. Perhaps you can help me with
that...
Wolfgang


Borut Bolcina [mailto:[EMAIL PROTECTED]] wrote:
>
> Hi, again
>
> since I had no luck and nobody answered the question I posted a
> month ago (subject: WebServices chaining), I'll try again.
>
>
>
> Can you please help me understand how one can implement this
> 'simple' scenario.
>
> CLIENT ---> INTERMEDIARY ---> ENDPOINT
>
> Many clients will connect with different "payload weight", which
> roughly determines the computational time for each request. The
> ultimate goal would be for the intermediary node to act as a queue and
> a load balancer for a number of endpoints.

So you want to implement something like a communications bus, right?
A load balancer is fine, but why should it act as a queue?
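
For illustration only (the class and endpoint names here are made up, not
part of any existing API): if the intermediary merely has to spread
requests over several endpoints, a simple round-robin selector is enough,
and no queue is needed:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: hand out endpoint URLs in round-robin order.
class EndpointSelector {
    private final List<String> endpoints;
    private final AtomicInteger next = new AtomicInteger(0);

    EndpointSelector(List<String> endpoints) {
        this.endpoints = endpoints;
    }

    // Thread-safe: floorMod keeps the index valid even after int overflow.
    String nextEndpoint() {
        int i = Math.floorMod(next.getAndIncrement(), endpoints.size());
        return endpoints.get(i);
    }

    public static void main(String[] args) {
        EndpointSelector sel = new EndpointSelector(
            List.of("http://ep1/axis/service", "http://ep2/axis/service"));
        for (int n = 0; n < 4; n++) {
            System.out.println(sel.nextEndpoint());
        }
    }
}
```

Each incoming request just picks the next endpoint and forwards to it;
no request ever has to wait in a queue for an endpoint to become "free".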

>
> My first goal would be simply for the intermediary to receive the
> request, read the headers, do some database operations, make some
> choices and, if everything is OK, forward the request with new headers
> to the "real" web service (ENDPOINT) to do the job. If successful, the
> intermediary again does some processing and returns the result back to
> the originator of the request - the CLIENT.
>
> Now, what happens if in the middle of this process another request
> comes in? The intermediary should be clever enough to know that the
> ENDPOINT is busy and act accordingly.

What do you mean by "busy"? Why should the endpoint only receive one
request at a time?

> I read about synchronous and asynchronous requests and other
> theoretical material I could find on the internet, but I am still
> confused. If asynchronous services were the way to go, then I guess
> the clients would have to be smarter and the choice of technology
> narrower. If a request-response mechanism could do the job, it would
> be easier, wouldn't it?

Yes...

>
> I am aware (it is even a requirement) that I will have to implement
> several ENDPOINTs which would work as a farm of the same service to
> increase efficiency, and the reliability of the service will be one of
> the major questions, so I don't want to start with a bad architecture
> which will limit my options later on.

Architectures using a web server/servlet engine or an application server
can process more than one request at a time. Clustering is only necessary
for failover/load-balancing purposes.
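
A minimal sketch of that thread-per-request idea, using only the JDK (the
class and method names are invented for illustration; in practice the
servlet engine does the pooling for you):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: each incoming request runs on its own worker thread, so a slow
// request does not block the ones that arrive after it.
class ConcurrentRequests {
    static String handle(String request) {
        try {
            Thread.sleep(200); // simulate a slow endpoint call
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "response for " + request;
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for the servlet engine's worker pool.
        ExecutorService pool = Executors.newFixedThreadPool(4);
        Future<String> a = pool.submit(() -> handle("req-1"));
        Future<String> b = pool.submit(() -> handle("req-2"));
        // Both handlers run concurrently; neither waits for the other.
        System.out.println(a.get());
        System.out.println(b.get());
        pool.shutdown();
    }
}
```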

>
> I have written a client which I installed on several machines to stress
> test the ENDPOINT web service. I bombarded the poor workhorse with
> requests from those machines, spaced just under a second apart. Each
> response took roughly 15 to 45 seconds, but they all got processed
> successfully. Now, I would be a happy person if I could squeeze this
> intermediary in between to do some logging. I guess the request-response
> nature of HTTP handled the queue for me. How do I do this with an
> intermediary node?

There is no queue, just several threads processing the requests in
parallel. Perhaps your web service uses a resource that is only available
once, so that all threads have to wait for it?
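
A small sketch of that effect, assuming a hypothetical shared resource
guarded by `synchronized` (all names invented for illustration): even with
a pool of worker threads, the handlers end up running one at a time.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch: a single synchronized resource serializes otherwise-parallel
// request handlers.
class SharedResource {
    private int inUse = 0;    // threads currently holding the resource
    private int maxInUse = 0; // highest concurrency ever observed

    synchronized String use(String request) {
        inUse++;
        maxInUse = Math.max(maxInUse, inUse);
        try {
            Thread.sleep(50); // simulate work while holding the resource
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        inUse--;
        return "done " + request;
    }

    synchronized int maxInUse() { return maxInUse; }

    public static void main(String[] args) throws Exception {
        SharedResource res = new SharedResource();
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 4; i++) {
            final int n = i;
            pool.submit(() -> res.use("req-" + n));
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        // Four worker threads, yet at most one handler held the
        // resource at any moment.
        System.out.println("max concurrent holders: " + res.maxInUse());
    }
}
```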

>
> Does my question narrow down to handling sessions? If a client sends a
> request which takes a long time to process, how do I handle another
> client's request which comes in seconds after the first one?

Sessions/threading handle this for you.

>
> Am I missing something very crucial here?
>
> Just to note: I am developing with Java and WebObjects as the
> application server. WO uses the Axis engine.
>
> Regards,
> bob
>
