== Quote from Sean Kelly ([email protected])'s article
> dsimcha Wrote:
>
> > This is great for super-scalable concurrency, the kind you need for things like
> > servers, but what about the case where you need concurrency mostly to exploit data
> > parallelism in a multicore environment? Are we considering things like parallel
> > foreach, map, reduce, etc. to be orthogonal to what's being discussed here, or do
> > they fit together somehow?
>
> I think it probably depends on the relative efficiency of a message-passing approach
> versus one using a thread pool for the small-N case (particularly for very large
> datasets). If message passing can come close to the thread pool in performance then
> it's clearly preferable. It may come down to whether pass by reference is allowed in
> some instances. It's always possible to use casts to bypass checking and pass by
> reference anyway, but it would be nice if this weren't necessary.
What about simplicity? Message passing is definitely safer. Parallel foreach (the kind that allows implicit sharing of essentially everything, including stack variables) is basically a cowboy approach that leaves all safety concerns to the programmer. OTOH, parallel foreach is a very easy construct to understand and use in situations where you have data parallelism and you're doing things that are obviously safe from the programmer's perspective, even though they can't be statically proven safe from the compiler's perspective.

Don't get me wrong, I definitely think message-passing-style concurrency has its place. It's just the wrong tool for the job if your goal is simply to exploit data parallelism to use as many cores as you can.
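To make the contrast concrete, here's a rough sketch of the two styles (in Python with threads, purely as an illustration; the thread is about D, and all the names here are made up). Style 1 is the "cowboy" parallel foreach: workers write directly into a shared array, which is safe here only by inspection (the index ranges are disjoint), not by anything a compiler could verify. Style 2 routes every task and result through queues, so nothing is shared, at the cost of the queueing/copying overhead the quoted post worries about.

```python
import threading
import queue

data = list(range(8))

# --- Style 1: "parallel foreach" with implicit sharing ---
# Each worker writes into the shared `squares` array. Disjoint index
# ranges make this safe in practice, but nothing checks that statically.
squares = [0] * len(data)

def foreach_worker(lo, hi):
    for i in range(lo, hi):
        squares[i] = data[i] ** 2

threads = [threading.Thread(target=foreach_worker, args=(i, i + 2))
           for i in range(0, len(data), 2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# --- Style 2: message passing ---
# Inputs and results cross channels; workers touch no shared state.
tasks, results = queue.Queue(), queue.Queue()

def mp_worker():
    while True:
        item = tasks.get()
        if item is None:          # sentinel: no more work
            break
        i, x = item
        results.put((i, x ** 2))  # send the result back as a message

workers = [threading.Thread(target=mp_worker) for _ in range(4)]
for w in workers:
    w.start()
for i, x in enumerate(data):
    tasks.put((i, x))
for _ in workers:
    tasks.put(None)
for w in workers:
    w.join()

squares2 = [0] * len(data)
while not results.empty():
    i, v = results.get()
    squares2[i] = v

assert squares == squares2 == [x ** 2 for x in data]
```

Both compute the same thing; the point is how much ceremony (and copying) the safe version needs for a loop that was "obviously" data-parallel to begin with.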
