On Sun, 2009-06-14 at 15:50 -0500, Robert Kern wrote:
> On Sun, Jun 14, 2009 at 14:31, Bryan Cole<br...@cole.uklinux.net> wrote:
> > I'm starting work on an application involving cpu-intensive data
> > processing using a quad-core PC. I've not worked with multi-core systems
> > previously and I'm wondering what is the best way to utilise the
> > hardware when working with numpy arrays. I think I'm going to use the
> > multiprocessing package, but what's the best way to pass arrays between
> > processes?
> >
> > I'm unsure of the relative merits of pipes vs shared mem. Unfortunately,
> > I don't have access to the quad-core machine to benchmark stuff right
> > now. Any advice would be appreciated.
>
> You can see a previous discussion on scipy-user in February titled
> "shared memory machines" about using arrays backed by shared memory
> with multiprocessing. Particularly this message:
>
> http://mail.scipy.org/pipermail/scipy-user/2009-February/019935.html
Thanks. Does Sturla's extension have any advantages over using a
multiprocessing.sharedctypes.RawArray accessed as a numpy view?

Bryan
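For concreteness, a minimal sketch of the RawArray-as-numpy-view approach
Bryan describes. The shape, dtype, and the in-place doubling are arbitrary
illustrations, not anything prescribed by the thread:

import ctypes
import multiprocessing as mp
from multiprocessing import sharedctypes

import numpy as np

SHAPE = (1000, 1000)  # illustrative size

def worker(raw, shape):
    # Re-wrap the shared buffer as a numpy array in the child process.
    # frombuffer creates a view, not a copy: both processes see the
    # same underlying memory.
    a = np.frombuffer(raw, dtype=np.float64).reshape(shape)
    a *= 2.0  # in-place modification, visible to the parent

if __name__ == '__main__':
    n = SHAPE[0] * SHAPE[1]
    # RawArray allocates shared memory with no synchronization lock;
    # use sharedctypes.Array instead if you need locking.
    raw = sharedctypes.RawArray(ctypes.c_double, n)
    a = np.frombuffer(raw, dtype=np.float64).reshape(SHAPE)
    a[:] = 1.0
    # The RawArray must be passed at process-creation time (inherited
    # through the Process args); it cannot be sent through a Queue or
    # Pipe after the child has started.
    p = mp.Process(target=worker, args=(raw, SHAPE))
    p.start()
    p.join()
    print(a[0, 0])  # prints 2.0: the child's write is visible here

This also illustrates the pipes-vs-shared-memory trade-off raised in the
quoted message: sending an array through a Pipe or Queue pickles it, which
copies the whole buffer, whereas the shared-memory view above transfers
nothing, so for large arrays the shared-memory route avoids per-message
copying at the cost of managing the allocation up front.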