On Nov 4, 4:27 pm, "Andy O'Meara" <[EMAIL PROTECTED]> wrote:
> People in the scientific and academic communities have to understand
> that the dynamics in commercial software can involve *very* different
> needs, and they have to show some open-mindedness there.

You are aware that the BDFL's employer is a company called Google?
Python is not just used in academic settings.

Furthermore, I gave you a link to Cilk++. It is a simple tool that lets
you parallelize existing C or C++ software using three small keywords
(cilk_spawn, cilk_sync, and cilk_for). That is the kind of tool I
believe would be useful, and it is not an academic judgement: it makes
it easy to take existing software and make it run efficiently on
multicore processors.

> As other posts have gone into extensive detail, multiprocessing
> unfortunately doesn't handle the massive/complex data structures
> situation (see my posts regarding real-time video processing).

That is something I don't believe. Why can't multiprocessing handle it?
Is using a proxy object out of the question? Is putting the complex
object in shared memory out of the question? Is keeping multiple copies
of the object, one per process, out of the question (did you see my
kd-tree example)? Using multiple independent interpreters inside a
single process does not make any of this easier.

For Christ's sake, researchers write global climate models using MPI.
And you think a toy problem like 'real-time video processing' is a
show-stopper for using multiple processes?
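To make the shared-memory point concrete, here is a rough sketch of my
own (not something Andy posted): a large "video frame" lives in a shared
ctypes buffer, and several worker processes operate on disjoint slices of
it in place, with no copying of the frame between processes. The names
FRAME_SHAPE and worker are made up for illustration, and it assumes numpy
is installed; it is a sketch of the idea, not a finished pipeline.

import ctypes
import multiprocessing as mp

import numpy

FRAME_SHAPE = (480, 640, 3)   # hypothetical frame size, rows x cols x RGB
FRAME_SIZE = FRAME_SHAPE[0] * FRAME_SHAPE[1] * FRAME_SHAPE[2]

def worker(shared_buf, start, stop):
    # Re-wrap the shared buffer as a numpy array -- no copy is made, so
    # the child operates on the same memory the parent filled in.
    frame = numpy.frombuffer(shared_buf, dtype=numpy.uint8)
    frame = frame.reshape(FRAME_SHAPE)
    # Purely illustrative in-place operation on this worker's rows.
    frame[start:stop] //= 2

if __name__ == '__main__':
    # RawArray has no lock; that is fine here because the workers write
    # to disjoint row ranges.
    shared_buf = mp.RawArray(ctypes.c_ubyte, FRAME_SIZE)
    frame = numpy.frombuffer(shared_buf, dtype=numpy.uint8)
    frame = frame.reshape(FRAME_SHAPE)
    frame[:] = 255            # pretend this is a captured frame

    rows = FRAME_SHAPE[0]
    procs = []
    for i in range(4):
        start, stop = i * rows // 4, (i + 1) * rows // 4
        p = mp.Process(target=worker, args=(shared_buf, start, stop))
        p.start()
        procs.append(p)
    for p in procs:
        p.join()

    print(frame[0, 0])        # the workers halved the rows in place

Passing the shared array as a Process argument (rather than through a
Queue or Pool) is what lets the children inherit it; a Manager proxy
would be the alternative when the structure is too irregular to flatten
into a buffer.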