On 07/04/2012 12:46 AM, Sebastian Sylvan wrote:

> You'd probably want data parallel code to use a cheaper abstraction
> than tasks anyway (e.g. no real need to have an individual stack for
> each "job" - just a shared set of worker threads across the whole
> program that all data parallel work shares).
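For concreteness, here is a minimal sketch of the kind of shared worker pool being described, written in modern Rust (the function name, worker count, and chunking scheme are all illustrative assumptions, not anything the thread specifies):

    use std::thread;

    /// Apply `f` to every element of `data` in parallel, splitting the
    /// slice across a fixed set of scoped worker threads instead of
    /// spawning one task (with its own stack) per "job".
    fn parallel_map_in_place<T: Send, F: Fn(&mut T) + Sync>(
        data: &mut [T],
        workers: usize,
        f: F,
    ) {
        let chunk = data.len().div_ceil(workers.max(1)).max(1);
        let f = &f;
        thread::scope(|s| {
            for slice in data.chunks_mut(chunk) {
                // Scoped threads may borrow `data` directly, so no
                // per-job allocation is needed.
                s.spawn(move || slice.iter_mut().for_each(f));
            }
        });
    }

    fn main() {
        let mut xs: Vec<u64> = (0..1_000_000).collect();
        parallel_map_in_place(&mut xs, 8, |x| *x = *x * *x);
        assert_eq!(xs[3], 9);
    }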

> That said, you may want some abstraction for sharing large, immutable,
> "database"-type data between long-running concurrent tasks too (where
> you can't guarantee that all "jobs" have finished with a specific
> chunk; it may be completely dynamic, unlike the data parallel scenario).
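One way that "large immutable database" pattern might look is reference counting: each long-running task holds its own handle, and the data is freed only when the last handle is dropped, so no task needs to know when the others are done with it. A small sketch in modern Rust (the record format and task count are made up for illustration):

    use std::sync::Arc;
    use std::thread;

    fn main() {
        // A large, immutable, "database"-type table, built once.
        let db: Arc<Vec<String>> =
            Arc::new((0..10_000).map(|i| format!("record-{i}")).collect());

        let tasks: Vec<_> = (0..4)
            .map(|id| {
                // Each task gets its own reference-counted handle.
                let db = Arc::clone(&db);
                thread::spawn(move || {
                    // Read-only access; the table lives until the last
                    // handle is dropped, so the tasks never coordinate
                    // about its lifetime.
                    println!("task {id}: {} records, first = {}", db.len(), db[0]);
                })
            })
            .collect();

        for t in tasks {
            t.join().unwrap();
        }
    }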

Yeah. Plausibly either some kind of pool-based abstraction or fork/join (or, yes, CUDA/OpenCL) might sneak into future versions. We focused on task parallelism at first because, well, because it's my area of interest and I wrote the first cut. MIMD is the most general case, and the CSP model has complementary roles in both concurrency and modularity (tasks being isolated). But various SIMD and MISD flavours are often appropriate abstractions, particularly when a problem is compute-bound rather than I/O-bound.
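To make the CSP flavour concrete: isolated tasks that share nothing and communicate only by message passing, sketched here with modern Rust's standard channels (the workload is a stand-in, not anything proposed in the thread):

    use std::sync::mpsc;
    use std::thread;

    fn main() {
        let (tx, rx) = mpsc::channel::<u64>();

        // MIMD-style task parallelism: each task runs its own code on
        // its own data, isolated from the others.
        for id in 0..3u64 {
            let tx = tx.clone();
            thread::spawn(move || {
                tx.send(id * id).unwrap();
            });
        }
        drop(tx); // the channel closes once every sender is gone

        // Results arrive in whatever order the tasks happen to finish.
        let total: u64 = rx.iter().sum();
        println!("sum of results: {total}");
    }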

-Graydon
_______________________________________________
Rust-dev mailing list
[email protected]
https://mail.mozilla.org/listinfo/rust-dev
