"Giovanni Bajo" <[EMAIL PROTECTED]> wrote:
> Josiah Carlson <[EMAIL PROTECTED]> wrote:
>
> > It would be substantially easier if there were a distributed RPC
> > mechanism that auto distributed to the "least-working" process in a
> > set of potential working processes on a single machine. [...]
>
> I'm not sure I follow you. Would you mind providing an example of a
> plausible API for this mechanism (aka how the code would look like,
> compared to the current Python threading classes)?
import autorpc
caller = autorpc.init_processes(autorpc.num_processors())

import callables
caller.register_module(callables)

result = caller.fcn1(arg1, arg2, arg3)

The point is not to compare the API, etc., with threading, but to compare
it with XML-RPC.  Because ultimately, what I would like to see is a
mechanism similar to XML-RPC: call a method on an instance, have it
automatically executed perhaps in some other thread in some other process,
or maybe even in the same thread in the same process (depending on load,
etc.), and have it return the result in-place.  That is just much easier
to work with (IMO).

The above shows a single call/return.  What if you don't care about
getting a result back before continuing, or perhaps you have a bunch of
things you want to get done?

...
q = Queue.Queue()
caller.delayed(q.put).fcn1(arg1, arg2, arg3)
...
r = q.get()  # will be delayed until q gets something

What to do about exceptions raised remotely in fcn1?  A fellow over on
the wxPython mailing list brought up the idea of exception objects:
perhaps not full stack frames, but an object carrying information such as
the exception type and traceback, used for both delayed and non-delayed
calls.

 - Josiah
_______________________________________________
Python-3000 mailing list
Python-3000@python.org
http://mail.python.org/mailman/listinfo/python-3000
Unsubscribe: http://mail.python.org/mailman/options/python-3000/archive%40mail-archive.com
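[Editor's note: the autorpc module above is hypothetical. As a rough sketch of how that proposed API could be approximated today, the following uses the multiprocessing module to dispatch registered module functions to a pool of worker processes, with both the in-place (synchronous) and delayed (callback) call styles. Names such as init_processes, register_module, and delayed mirror the post's proposal and are not a real library; the load-based "least-working" dispatch is left to the pool's internal scheduling.]

```python
# Hypothetical sketch of the autorpc-style API proposed in the post,
# built on multiprocessing.Pool.  Not a real library.
import multiprocessing


class _DelayedProxy:
    """Proxy returned by Caller.delayed(cb): calls run asynchronously in
    a worker process and the result is handed to cb instead of being
    returned in-place."""

    def __init__(self, caller, callback):
        self._caller = caller
        self._callback = callback

    def __getattr__(self, name):
        fcn = self._caller._lookup(name)
        def call(*args, **kwargs):
            self._caller._pool.apply_async(
                fcn, args, kwargs, callback=self._callback)
        return call


class Caller:
    def __init__(self, processes):
        self._pool = multiprocessing.Pool(processes)
        self._functions = {}

    def register_module(self, module):
        # Register every public callable in the module for dispatch.
        for name in dir(module):
            obj = getattr(module, name)
            if callable(obj) and not name.startswith('_'):
                self._functions[name] = obj

    def _lookup(self, name):
        try:
            return self._functions[name]
        except KeyError:
            raise AttributeError(name)

    def delayed(self, callback):
        return _DelayedProxy(self, callback)

    def __getattr__(self, name):
        # Synchronous call: block until a worker returns the result.
        fcn = self._lookup(name)
        def call(*args, **kwargs):
            return self._pool.apply(fcn, args, kwargs)
        return call


def init_processes(n):
    return Caller(n)


def num_processors():
    return multiprocessing.cpu_count()


if __name__ == '__main__':
    import operator
    caller = init_processes(num_processors())
    caller.register_module(operator)
    print(caller.add(2, 3))  # executed in a worker, result returned in-place
```

Registered functions must be picklable (e.g. module-level functions), since arguments and callables cross process boundaries; a real implementation would also need the exception-object scheme discussed above to report remote failures in both call styles.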