At 12:16 PM 6/15/2007, Greg Lindahl wrote:
On Fri, Jun 15, 2007 at 09:57:08AM -0400, Joe Landman wrote:

> First, shared memory is nice and simple as a programming model.

Uhuh. You know, there are some studies going on in which students
learning parallel programming implement the same algorithm with MPI
and with shared memory. Would you like to make a bet as to whether
they found shared memory much easier?

Only if they have a test-and-set/semaphore mechanism provided.
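
(To make that concrete, here is roughly the kind of primitive I mean: a test-and-set spinlock plus a POSIX semaphore, sketched in C. The GCC builtins and the sem_* calls are standard; the little wrapper functions and names are just mine, for illustration.)

#include <semaphore.h>

/* Illustrative only: a test-and-set spinlock on top of the GCC
 * atomic builtins, and the POSIX counting-semaphore calls for
 * comparison.  acquire/release/critical are made-up names. */

volatile int lock = 0;                    /* 0 = free, 1 = held */

void acquire(volatile int *l)
{
    /* __sync_lock_test_and_set returns the previous value, so we
     * spin until we were the one who changed it from 0 to 1. */
    while (__sync_lock_test_and_set(l, 1))
        ;                                 /* busy-wait */
}

void release(volatile int *l)
{
    __sync_lock_release(l);               /* store 0, release barrier */
}

/* The semaphore version of protecting a critical section: */
void critical(sem_t *sem)
{
    sem_wait(sem);                        /* P: block until available */
    /* ... touch the shared data ... */
    sem_post(sem);                        /* V: let someone else in   */
}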

One thing I find nice about message passing, as a conceptual model, is that the notion of "simultaneity" cannot exist: there is always some finite time between the data existing in place A and the same data existing in place B.
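
To make that concrete, a toy MPI fragment (the send/receive calls are standard MPI; the value 42 and the rest are just for illustration):

#include <mpi.h>

/* Toy fragment: the value exists on rank 0 first, and only after the
 * matching receive completes does a copy exist on rank 1.  There is
 * no instant at which both ranks "simultaneously" have it. */
int main(int argc, char **argv)
{
    int rank, value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;                                  /* data exists at A */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);                 /* ...later, at B */
    }

    MPI_Finalize();
    return 0;
}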

So, if you program in a message passing model, you have to think explicitly about such things. Right from the start you have to deal with issues like "ships passing in the night", and that's a big hurdle coming from the "one process on one giant block of memory" model that most things start out with. Shared memory is, in a sense, a parallel hardware implementation of the multiple threads in a classic single-CPU multithreaded kernel: instead of context switching all the time, each processor keeps its own context.
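
As a sketch of the kind of "ships passing in the night" issue I mean, consider a simple two-rank exchange; the exchange() wrapper and its arguments are invented for illustration, the MPI calls themselves are standard:

#include <mpi.h>

/* Sketch of a two-rank exchange.  If both ranks posted a blocking
 * MPI_Send before their MPI_Recv, each could wait forever for the
 * other to start receiving: the "ships passing in the night" case.
 * One fix is to order the calls explicitly (another is MPI_Sendrecv). */
void exchange(int rank, int *mine, int *theirs)
{
    int other = 1 - rank;                 /* assumes exactly two ranks */

    if (rank == 0) {                      /* rank 0 sends first...     */
        MPI_Send(mine,   1, MPI_INT, other, 0, MPI_COMM_WORLD);
        MPI_Recv(theirs, 1, MPI_INT, other, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    } else {                              /* ...rank 1 receives first  */
        MPI_Recv(theirs, 1, MPI_INT, other, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        MPI_Send(mine,   1, MPI_INT, other, 0, MPI_COMM_WORLD);
    }
}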



James Lux, P.E.
Spacecraft Radio Frequency Subsystems Group
Flight Communications Systems Section
Jet Propulsion Laboratory, Mail Stop 161-213
4800 Oak Grove Drive
Pasadena CA 91109
tel: (818)354-2075
fax: (818)393-6875

