After Googling for examples of this, and looking at the Cookbook
http://www.scipy.org/Cookbook/Multithreading
MPI, and POSH (which seems dead?), I still don't think I know the answer...
We have a data collection app running on dual-core processors. One thread collects new data and writes it directly into a numpy circular buffer; another thread does correlation on the newest data and the occasional FFT. Together they now use 50% of the CPU, total.
The threads never need to access the same buffer slices.
I'd prefer to have two processes, forking the FFT work off to utilize the second core. Besides the numpy array itself, the processes would only need to share two variables (the buffer insert position and a short-integer result from the FFT process; each process would only ever read one and write the other).
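
Roughly, the layout I have in mind is something like the following untested sketch (buffer length and dtype are made up): one anonymous mmap created before the fork, so both processes see the same pages, with the circular buffer living in it.

import mmap, os
import numpy as np

BUF_SAMPLES = 1 << 20                      # made-up buffer length
ITEM = np.dtype(np.float64)                # made-up sample dtype

# Anonymous, shared mapping created before fork(); at least on Linux the
# forked child shares the same pages, so writes are visible both ways.
shm = mmap.mmap(-1, BUF_SAMPLES * ITEM.itemsize)
buf = np.frombuffer(shm, dtype=ITEM)       # numpy view onto the shared pages

if os.fork() == 0:
    # child: correlation/FFT process -- only ever reads buf
    pass
else:
    # parent: collection process -- only ever writes buf
    pass
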
Should I pass the numpy array's address to the second process and just construct an identical array around it there, as in
http://projects.scipy.org/pipermail/numpy-discussion/2006-October/023647.html ?
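
If I went that route, I suppose it would look roughly like this (untested; the shape and dtype would have to be passed along with the address, and the pages behind the address would really have to be shared, e.g. mmap'd, for writes to stay visible across the fork):

import ctypes
import numpy as np

def array_from_address(address, shape, dtype=np.float64):
    # Wrap an existing memory region, given by its raw address, as a numpy view.
    nbytes = int(np.prod(shape)) * np.dtype(dtype).itemsize
    raw = (ctypes.c_char * nbytes).from_address(address)
    return np.frombuffer(raw, dtype=dtype).reshape(shape)
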
Or use a file-like object to share the other two variables? mmap, as in http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/413807 ?
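
For just those two values, I suppose even a tiny anonymous mmap plus struct would do (again untested; 'q' for the insert position and 'h' for the short result are guesses at the sizes):

import mmap, struct

state = mmap.mmap(-1, 16)                  # shared, anonymous; create before fork

def set_insert_pos(n):                     # written by the collection process
    state[0:8] = struct.pack('q', n)

def get_insert_pos():
    return struct.unpack('q', state[0:8])[0]

def set_fft_result(v):                     # written by the FFT process
    state[8:10] = struct.pack('h', v)

def get_fft_result():
    return struct.unpack('h', state[8:10])[0]
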
I also thought ctypes'
ctypes.string_at(address[, size])
might do both easily enough, although it would mean a copy. We already use ctypes for the collection thread.
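
Something like this is what I mean (untested; the dtype is just an example), accepting one copy per call:

import ctypes
import numpy as np

def snapshot(address, nsamples, dtype=np.float64):
    # string_at copies nbytes starting at address into a new bytes object;
    # frombuffer then wraps that (read-only) as an array -- one copy total.
    nbytes = nsamples * np.dtype(dtype).itemsize
    return np.frombuffer(ctypes.string_at(address, nbytes), dtype=dtype)
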
Does anyone have a lightweight solution to this relatively simple sort of problem?
- Ray
