I've started thinking about how to make ParallelFuture jive with D's
new threading model, since it was designed before shared and
std.concurrency were implemented and is basically designed around
default sharing. (core.thread takes a non-shared delegate, and allows
you to completely bypass the shared system, and from what I remember of
newsgroup discussions, this isn't going to change.) I've re-read the concurrency chapter in TDPL and I'm still trying to understand what the model actually is for shared data. For example, the following compiles and, IIUC shouldn't: shared real foo; void main() { foo++; } I guess my high-level question that I'm still not quite getting is "What is shared besides a piece of syntactic salt to make it harder to inadvertently share data across threads?" Secondly, my parallel foreach loop implementation relies on sharing the current stack frame and anything reachable from it across threads. For example: void main() { auto pool = new TaskPool; uint[] nums = fillNums(); uint modBy = getSomeOtherNum(); foreach(num; pool.parallel(nums)) { if(isPrime(num % modBy)) { writeln("Found prime number: ", num % modBy); } } } Allowing stuff like this is personally useful to me, but if the idea is that we have no implicit sharing across threads, then I don't see how something like this can be implemented. When you call a parallel foreach loop like this, **everything** on the current stack frame is **transitively** shared. Doing anything else would require a complete redesign of the library. Is calling pool.parallel enough of an explicit asking for "here be dragons" that the delegate should simply be cast to shared? If not, does anyone see any other reasonable way to do parallel foreach? On 7/31/2010 7:31 AM, Andrei Alexandrescu wrote:
_______________________________________________
phobos mailing list
[email protected]
http://lists.puremagic.com/mailman/listinfo/phobos
