On Nov 6, 6:05 pm, Walter Overby <[EMAIL PROTECTED]> wrote:
> I don't understand how this would help. If these large data
> structures reside only in one remote process, then the overhead of
> proxying the data into another process for manipulation requires too
> much IPC, or at least so Andy stipulates.
Perhaps it will, or perhaps not. Reading from or writing to a pipe has only slightly more overhead than a memcpy. There are things Python needs to do that are slower than the IPC. In this case, the real constraint would probably be contention for the object in the server, not the IPC. (And don't blame it on the GIL, because putting a lock around the object would not be any better.)

> > 3. Go to http://pyro.sourceforge.net, download the code and read the
> > documentation.
>
> I don't see how this solves the problem with 2.

It puts Python objects in shared memory. Shared memory is the fastest form of IPC there is; the overhead is basically zero. The only constraint will be contention for the object.

> I understand Andy's problem to be that he needs to operate on a large
> amount of in-process data from several threads, and each thread mixes
> CPU-intensive C functions with callbacks to Python utility functions.
> He contends that, even though he releases the GIL in the CPU-bound C
> functions, the reacquisition of the GIL for the utility functions
> causes unacceptable contention slowdowns in the current implementation
> of CPython.

Yes, callbacks to Python are expensive. But is the problem the GIL? Instead of contention for the GIL, he seems to prefer contention for a complex object. Is that any better? It, too, has to be protected by a lock.

> If I understand them correctly, none of these concerns are silly.

No, they are not. But I think he underestimates what multiple processes can do. The objects in 'multiprocessing' are already a lot faster than their 'threading' and 'Queue' counterparts.

--
http://mail.python.org/mailman/listinfo/python-list