Answers embedded below:
Alaric Snell-Pym <[EMAIL PROTECTED]> writes:

> Well, as far as I can tell, MPI seems to have a fairly static view of
> the world as having a fixed number of processes in it upon startup,

Correct. This helps with communication transport optimizations: when you
create all your processes at startup, libraries like Open MPI can work out
whether they can use fast local communication between processes that reside
on the same physical node.

> so the following (in Erlang) comes to mind:
>
> http://www.erlang.org/doc/reference_manual/processes.html#10.2
>
> which, basically, creates a new process and returns its PID, which is
> then usable as the destination for a message send.

As far as I know, you cannot do this with MPI.

> Also, looking at the MPI egg, it has functions to send integers,
> u8vectors, etc - how do you send *s-expressions*?

The MPI egg also lets you send and receive bytevectors (blobs). If you
serialize your Scheme object with the s11n egg, you can pass it around
using the bytevector communication routines.

> MPI seems based around the classic scientific-computing model: you
> have a fixed array of processors and pass numeric data between them,
> often using a SIMD model.

You can pass any data between them, really :-) And what you have is not
exactly SIMD; it's a generalization along the lines of map/reduce (which
can easily be used for SIMD, of course).

> Erlang, on the other hand, works as a bunch of processes (spread over
> one or more physical nodes) that are created and destroyed on the
> fly, with messages composed along the same lines as S-expressions
> (lists of symbols, numbers, other lists, etc) being passed around.

I am obviously biased towards MPI, so I don't think the distinction
between dynamically creating processes and keeping a static pool of
processes is that important. But I do agree that a distributed
communication library should be able to support both scenarios.
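To illustrate the serialize-then-ship idea: here is a minimal sketch in
Python, with `pickle` standing in for the s11n egg and an in-memory byte
buffer standing in for MPI's bytevector send/receive routines (the names
`serialize`, `deserialize`, and `message` are purely illustrative, not
part of any real MPI binding):

```python
import pickle

# pickle stands in for Chicken's s11n egg: it turns a structured
# message (the s-expression analogue) into a flat byte string, which
# is exactly what MPI's bytevector routines can transport.

def serialize(obj):
    """Encode a structured message as bytes (what you'd hand to MPI)."""
    return pickle.dumps(obj)

def deserialize(buf):
    """Reconstruct the structured message on the receiving side."""
    return pickle.loads(buf)

# A message shaped like an s-expression: nested lists of symbols,
# numbers, and other structured data.
message = ["compute", ["range", 0, 1024], ["reply-to", 3]]

wire_bytes = serialize(message)     # sender side: bytes go over MPI
received = deserialize(wire_bytes)  # receiver side: structure restored

assert received == message
```

The transport layer only ever sees opaque bytes; the high-level protocol
(what those bytes mean) is defined entirely by the two endpoints, which
is the same division of labor described in this thread.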
And, as I explained above, with Chicken you can pass around any
serializable Scheme object using s11n and mpi.

> Now, one could of course build that *on top of* MPI: make each MPI
> process be a Chicken unix-level process, and then run a core server

Each MPI process is a Unix-level process.

> Chicken-thread that listens for messages from MPI and routes them to
> the appropriate thread, or notices messages telling it to start a new
> thread and handle them.

Yes, you do have to design your own high-level protocols on top of MPI.

> With the messages being bytevectors that actually contain
> s-expressions encoded in utf8 or some such. But MPI itself doesn't
> provide the same kind of services as the Erlang concurrency system.

MPI on its own, no. MPI plus a high-level language is a different story,
though.

_______________________________________________
Chicken-users mailing list
[email protected]
http://lists.nongnu.org/mailman/listinfo/chicken-users
