There isn't really a good software-only solution for this.  If the remoting
method uses Delegate.BeginInvoke or any other thread-pool-based approach to
pass the long request off to someone else, the problem is still the same
since they're all being managed roughly the same way.  You'd have to pass
really long-running requests off to a thread of your own.

But the decision to program the IOCP the way they did makes sense - it
normally wouldn't do you any good to have more threads than there are
processors vying for time on the processor if they're going to be doing
nothing but burning up processor cycles.  You're just creating work for the
scheduler and actually increasing the amount of wall clock time needed to
get the job done.  Better to just let one server thread per processor get on
with the business at hand and get done as soon as possible.  If you write
your remoting server to pass the work off to a thread of your own, then you
might end up with many clients asking you to do the same thing, creating
loads of threads in a very unscalable fashion - all fighting over a small
number of processors and actually slowing things down by saturating the
system with threads.

That said, one advantage of still being able to service a remote method
invocation under such a cpu-intensive scenario is the ability to
cancel/abort those long operations by making a different remote method
call.  So even if someone submits long-running requests that tie up the
processor(s), you could still make a subsequent method call to the
server to abort the operation (assuming you programmed the server to
pay attention to such requests).
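A minimal sketch of that cancellation pattern, assuming a hypothetical
remoted Worker type registered with Singleton activation so that both
clients' calls reach the same instance (the names and the loop body are
illustrative, not from any real server):

```csharp
using System;
using System.Threading;

// Hypothetical remoted type; a real server would register this with
// the remoting infrastructure as a Singleton well-known object.
public class Worker : MarshalByRefObject
{
    // volatile so the flag set by the cancelling call is seen
    // promptly by the thread running the long operation.
    private volatile bool _cancelRequested;

    // Long-running, cpu-bound method invoked by one client.
    // Returns true if it ran to completion, false if aborted.
    public bool CrunchNumbers(int iterations)
    {
        for (int i = 0; i < iterations; i++)
        {
            if (_cancelRequested)
                return false;   // aborted before completion
            // ... one unit of the real cpu-bound work goes here ...
        }
        return true;
    }

    // Invoked via a *second* remote call (serviced on another pool
    // thread) to abort the operation in progress.
    public void Cancel()
    {
        _cancelRequested = true;
    }
}
```

The key point is that the long operation has to poll the flag
cooperatively - nothing in the runtime aborts it for you.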

I really think that this is one of those problems where there isn't a really
great general purpose software solution to be had.  You really need more
hardware in the form of multiprocessor boxes, and more than one of those
boxes set up in a load-balanced server-farm sort of scenario.  The decision
that was made is arguably the better decision for the design of a general
purpose framework like we're talking about.

If you have some control over, or knowledge of, the number of clients that
be interacting with your system such that you're not in danger of having an
uncapped number of clients submit such long-running cpu-intensive requests,
then you can pass off those long requests to dedicated threads, blocking
until that thread finishes the job.  Theoretically, this will free up the
thread-pool threads to service subsequent remote method invocations,
allowing you to design in support for cancellation, etc.  Keep in mind,
though, that one of the criteria that's used by the thread manager to
determine whether or not to add a new thread to the pool is cpu utilization.
So if your dedicated worker thread(s) are saturating the processor(s), then
going to this trouble still won't do any good because the thread pool
manager wouldn't add a new thread to the mix anyway, because it would just
make the situation worse.
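Here's a rough .NET 1.x-style sketch of that hand-off, assuming a
hypothetical Dispatcher remoted type (all names are illustrative).  The
pool thread that received the call parks on Join while a dedicated
thread does the cpu-bound work:

```csharp
using System;
using System.Threading;

public class Dispatcher : MarshalByRefObject
{
    // Hypothetical long-running request: the pool thread hands the
    // work to a dedicated thread and blocks until it finishes.
    public long LongRequest(int n)
    {
        LongJob job = new LongJob(n);
        Thread worker = new Thread(new ThreadStart(job.Run));
        worker.IsBackground = true;
        worker.Start();
        worker.Join();      // pool thread blocks here, off the cpu
        return job.Result;
    }
}

// Holds the work and its result; .NET 1.x ThreadStart takes no
// arguments, so state rides along on the object itself.
internal class LongJob
{
    private readonly int _n;
    public long Result;

    public LongJob(int n) { _n = n; }

    public void Run()
    {
        long sum = 0;
        for (int i = 1; i <= _n; i++)
            sum += i;       // stand-in for the real cpu-bound work
        Result = sum;
    }
}
```

Note this only buys you anything per the caveat above - if those
dedicated threads saturate the processors, the pool manager still won't
grow the pool to service new calls.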

-Mike
http://staff.develop.com/woodring
http://www.develop.com/devresources


----- Original Message -----
From: "Marco Russo" <[EMAIL PROTECTED]>

A better way to solve the problem is to use an async call to the
function code, leaving the Remoting interface only the role of listener.
As far as I know, there is no automatic way to dispatch long remoting
calls onto different threads.

Marco

You can read messages from the Advanced DOTNET archive, unsubscribe
from Advanced DOTNET, or subscribe to other DevelopMentor lists at
http://discuss.develop.com.
