On 25.06.2006, at 14:57, Stephen Deasey wrote:


> Here's a question though: if you have URL limits and per cache limits,
> which takes precedence?

Good question!
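For what it's worth, here is the rule I would intuitively expect, sketched in plain C (the rule itself is my assumption, not something you have specified): the tighter of the two limits wins, i.e. the effective timeout for a cache operation is the minimum of the time remaining in the URL budget and the per-cache limit.

    #include <stdio.h>

    /* Effective timeout for a cache op: whichever limit expires first wins. */
    static double effective_timeout(double url_remaining_sec, double cache_limit_sec)
    {
        return url_remaining_sec < cache_limit_sec
               ? url_remaining_sec : cache_limit_sec;
    }

    int main(void)
    {
        /* 2s left in the URL budget, cache allows 5s: URL limit wins. */
        printf("%.1f\n", effective_timeout(2.0, 5.0));   /* prints 2.0 */
        /* 10s left, cache allows 0.5s: cache limit wins. */
        printf("%.1f\n", effective_timeout(10.0, 0.5));  /* prints 0.5 */
        return 0;
    }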

As I read your response, I see where you are heading. This makes
a lot of sense if you have a very clear idea of what your "job" is.

In the case of fetching a dynamic page, the "job" is clear:
it is the time between the start of the request and the moment
the last byte of the page is returned. That is more or less
the main job of a web server.
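If I understand the proposal correctly, that suggests one absolute deadline set when the request starts, with every internal wait computed against it. A minimal sketch of that idea in plain POSIX C (the helper names are mine, this is not NaviServer API):

    #include <time.h>

    /* Absolute deadline = now + the total budget for the request. */
    static struct timespec request_deadline(long budget_sec)
    {
        struct timespec dl;
        clock_gettime(CLOCK_MONOTONIC, &dl);
        dl.tv_sec += budget_sec;
        return dl;
    }

    /* Seconds left until the deadline; <= 0 means the job has timed out. */
    static double seconds_remaining(const struct timespec *dl)
    {
        struct timespec now;
        clock_gettime(CLOCK_MONOTONIC, &now);
        return (double)(dl->tv_sec - now.tv_sec)
             + (double)(dl->tv_nsec - now.tv_nsec) / 1e9;
    }

Every blocking call inside the request would then wait for at most seconds_remaining(&dl), so the waits can never add up to more than the budget.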

But in the case of an arbitrary call to some long-running
procedure, as in application server(s), things become complicated,
because a "job" is difficult to define. You can think of a job as
a procedure call, or a call to a predefined sequence of procedures,
or... what?

Take, for example, our application, which uses NS as a general-
purpose application server. It starts a "job" of saving a couple
of TB of user data to some tape media. Underneath, this is just
a procedure call. But inside, we start numerous threads, call
hundreds of internal subprocedures, do socket communication with
several other instances of NS running on other hosts, etc.
We use all sorts of internal timeouts there. Yet we do not try
to sum them all up and project a time in the future when the "job"
should finish! We do not know when that will be, as a million things
can happen in between. I will not go into more detail because I'm
sure you understand what I mean.
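To make the contrast concrete: our style is roughly the following, where each step protects itself with its own fixed, local timeout and nobody derives an overall deadline (a sketch with invented names, not our actual code):

    #include <poll.h>

    /* Wait until fd is readable, for at most timeout_ms milliseconds.
     * Returns >0 if readable, 0 on timeout, -1 on error. Each call site
     * picks a timeout that makes sense locally (e.g. 30s per tape drive,
     * 5s per peer NS instance); no caller sums these into a job deadline. */
    static int wait_readable(int fd, int timeout_ms)
    {
        struct pollfd pfd;
        pfd.fd = fd;
        pfd.events = POLLIN;
        return poll(&pfd, 1, timeout_ms);
    }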

I must say that I will have to think twice (or more) about the
idea you are proposing and see how it would affect us, as this is
not trivial. And it is not compatible with any code we have running
now. The idea itself seems very appealing, but I get headaches
when I start to think about the implications... :-(

Cheers
Zoran
